R2WinBUGS error (no prior specified for this initial value) - winbugs
The original WinBUGS code is as follows:
model {
  for (i in 1:n) {
    for (j in 1:J) {
      y[i,j] <- equals(D[i],j)
      D[i] ~ dcat(p[i,])
      p[i,j] <- phi[i,j] / sum(phi[i,])
      LL[i,j] <- y[i,j]*log(p[i,j])
    }
    for (j in 2:J) {
      S.ed[i,j] <- c1[j]*ed.c[i] + equals(degree,2)*c2[j]*ed.c[i]*ed.c[i] + inprod(delta.c[j,],Spline[i,])
      log(phi[i,j]) <- beta0[j] + beta1[j]*white[i] + beta2[j]*exp.c[i] + S.ed[i,j]
    }
    LLt[i] <- sum(LL[i,])
    phi[i,1] <- 1
    H[i] <- 1/exp(LLt[i])
    exp.c[i] <- exp[i]-mean(exp[])
    ed.c[i] <- ed[i]-mean(ed[])
  }
  # Priors
  beta0[1] <- 0
  beta1[1] <- 0
  beta2[1] <- 0
  c1[1] <- 0
  c2[1] <- 0
  for (j in 2:J) {
    beta1[j] ~ dnorm(0,0.0001)
    beta2[j] ~ dnorm(0,0.0001)
    c1[j] ~ dnorm(0,0.0001)
    c2[j] ~ dnorm(0,0.0001)
    beta0[j] ~ dnorm(0,0.0001)
  }
  Dv <- -2*sum(LLt[])
  # degree = 1 for linear spline, = 2 for quadratic spline
  for (k in 1:K) {
    knot.c[k] <- knot[k]-mean(ed[])
    for (i in 1:337) {
      S[i,k] <- (ed.c[i]-knot.c[k])*step(ed[i]-knot[k])
      Spline[i,k] <- pow(S[i,k],degree)
    }
  }
  # Random spline coefficients
  for (j in 2:J) {
    for (k in 1:K) {
      delta[j,k] ~ dnorm(0,tau[j])
      delta.c[j,k] <- delta[j,k]-mean(delta[j,])
    }
    # Full conditionals for the spline precision
    tau[j] ~ dgamma(As[j],Bs[j])
    As[j] <- 0.1 + K/2
    Bs[j] <- 0.1 + inprod(delta.c[j,],delta.c[j,])/2
  }
}
*INITS*
list(beta1=c(NA,0,0,0,0),beta2=c(NA,0,0,0,0),c1=c(NA,0,0,0,0),
c2=c(NA,0,0,0,0),tau=c(NA,1,1,1,1),
beta0=c(NA,0,0,0,0),delta=structure(.Data=c(NA,NA,NA,NA,NA,NA,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),.Dim=c(5,6)))
list(beta1=c(NA,-0.5,0.5,0,0),beta2=c(NA,-0.5,0.5,0,0),
c1=c(NA,-0.5,0.5,0,0),c2=c(NA,-0.5,0.5,0,0),tau=c(NA,1,1,1,1),
beta0=c(NA,-0.5,0.5,0,0),delta=structure(.Data=c(NA,NA,NA,NA,NA,NA,
-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,
0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,0,0),.Dim=c(5,6)))
*DATA*
My own R2WinBUGS code is as follows:
library(R2WinBUGS)

rm(list=ls())
setwd("C:/Users/~~/BMCD")
Data <- read.table("model604data.txt")
D <- Data[,1]; exp <- Data[,2]; ed <- Data[,3]; white <- Data[,4]
J <- 5; K <- 6; n <- 337; knot <- c(9.6,12,13.6,14,16,16.4); degree <- 2
data <- list(D=D, exp=exp, ed=ed, white=white, J=J, K=K, n=n, knot=knot, degree=degree)
parameters <- c("beta0","beta1","beta2")
inits <- list(
  list(beta1=c(NA,0,0,0,0), beta2=c(NA,0,0,0,0), c1=c(NA,0,0,0,0), c2=c(NA,0,0,0,0),
       tau=c(NA,1,1,1,1), beta0=c(NA,0,0,0,0),
       delta=structure(.Data=c(NA,NA,NA,NA,NA,NA,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),
                       .Dim=c(5,6))),
  list(beta1=c(NA,-0.5,0.5,0,0), beta2=c(NA,-0.5,0.5,0,0), c1=c(NA,-0.5,0.5,0,0), c2=c(NA,-0.5,0.5,0,0),
       tau=c(NA,1,1,1,1), beta0=c(NA,-0.5,0.5,0,0),
       delta=structure(.Data=c(NA,NA,NA,NA,NA,NA,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,-0.5,0.5,0,0),
                       .Dim=c(5,6)))
)
model604 <- bugs(data, inits, parameters,
                 model.file="C:/Users/~~/model604.odc",
                 debug=TRUE, n.chains=2, n.iter=2000, n.burnin=500,
                 bugs.directory="C:/Program Files/WinBUGS14/")
How can I fix this error?
I tried changing the NAs in the inits to 0, but that didn't work and gave me a different error message.
The data are in rectangular format, but I think that part is alright, and I guess the syntax translation of the model and data is okay.
Should I add a prior specification somewhere?
Related
Random intercept and slope model with correlation and complex within-individual variability in WinBUGS
I am trying to implement a random intercept and slope model with complex within-individual error variability in WinBUGS. I was getting "expected multivariate node". Here is a sample of the code that produces the error message. Any help would be much appreciated. Please let me know if there is need for further clarification.

model {
  for (i in 1:N) {
    y[i,1:2] ~ dnorm(mu[i,1:2],prec[i,1:2])
    mu[i,1] <- alpha[1] + beta[1]*(wave[i]-1) + b[id[i],1] + u[id[i],1]*(wave[i]-1)
    mu[i,2] <- alpha[2] + beta[2]*(wave[i]-1) + b[id[i],2] + u[id[i],2]*(wave[i]-1)
    prec[i,1] <- 1 / exp(delta1[1] + delta2[1] * (wave[i]-1) + b[id[i], 3])
    prec[i,2] <- 1 / exp(delta1[2] + delta2[2] * (wave[i]-1) + b[id[i], 4])
  }
  alpha[1] ~ dnorm(0,0.000001)
  alpha[2] ~ dnorm(0,0.000001)
  beta[1] ~ dnorm(0,0.000001)
  beta[2] ~ dnorm(0,0.000001)
  delta1[1] ~ dnorm(0,0.000001)
  delta1[2] ~ dnorm(0,0.000001)
  delta2[1] ~ dnorm(0,0.000001)
  delta2[2] ~ dnorm(0,0.000001)
  for (i in 1:Nid) {
    b[i,1:4] ~ dmnorm(m1[1:4], prec1[1:4,1:4])
    u[i,1:2] ~ dmnorm(m2[1:2], prec2[1:2,1:2])
  }
  # Priors for random terms
  prec1[1:4,1:4] ~ dwish(R1[1:4,1:4], 4)
  sigma1[1:4,1:4] <- inverse(prec1[1:4,1:4])
  R1[1,1] <- 0.00001
  R1[2,2] <- 0.00001
  R1[3,3] <- 0.00001
  R1[4,4] <- 0.00001
  R1[1,2] <- 0
  R1[1,3] <- 0
  R1[1,4] <- 0
  R1[2,3] <- 0
  R1[2,4] <- 0
  R1[3,4] <- 0
  S1[1:4,1:4] <- inverse(prec1[1:4,1:4])
  prec2[1:2,1:2] ~ dwish(R1[1:2,1:2], 2)
  sigma2[1:2,1:2] <- inverse(prec2[1:2,1:2])
  R2[1,1] <- 0.0001
  R2[2,2] <- 0.0001
  R2[1,2] <- 0
  R2[2,1] <- 0
  S2[1:2,1:2] <- inverse(prec2[1:2,1:2])
}
I think the problem is actually in the first line of the first for() loop:

y[i,1:2] ~ dnorm(mu[i,1:2],prec[i,1:2])

You're saying here that the first two columns of the ith row of y have a distribution. The distribution would have to have as many dimensions as there are values, so WinBUGS is looking for a bivariate node here:

y[i,1:2] ~ dmnorm(mu[i,1:2], prec[1:2,1:2])

Alternatively, you could separate them:

y[i,1] ~ dnorm(mu[i,1], prec[i,1])
y[i,2] ~ dnorm(mu[i,2], prec[i,2])
What does yield expand to in a multi-dimensional loop in Scala?
From Here we get to know that an expression like:

for (i <- 1 to 10) yield i + 1

will expand into:

(1 to 10).map(_ + 1)

But what does the following expression expand to?

for (
  i <- 1 to 50
  j <- i to 50
) yield List(1, i, j)

Is this correct?

(1 to 50).map(x => (1 to 50).map(List(1, x, _)))

I'm interested in this problem because I'd like to write a function which performs multiple Xi <- Xi-1 to 50 generators, as shown below:

for (
  X1 <- 1 to 50
  X2 <- X1 to 50
  X3 <- X2 to 50
  .....
  Xn <- Xn-1 to 50
) yield List(1, X1, X2, X3, ....., Xn)

The function has one parameter, dimension, which denotes the n in the above expression. Its return type is IndexedSeq[List[Int]]. How can I achieve that? Thank you for answering (:
It's well explained in the relevant doc. In particular:

for (x <- c1; y <- c2; z <- c3) yield { ... }

will be translated into:

c1.flatMap(x => c2.flatMap(y => c3.map(z => { ... })))

I don't think there is a way to abstract over arbitrarily nested comprehensions (unless you're using voodoo magic, like macros).
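As a quick sanity check of that translation (a minimal sketch of my own, using small ranges purely for illustration), the two-generator comprehension from the question and its hand-written flatMap/map desugaring produce the same sequence:

// For-comprehension with two generators; the second range starts at the
// current value of the first, as in the question.
val comprehension = for {
  i <- 1 to 5
  j <- i to 5
} yield List(1, i, j)

// The desugared form: flatMap for every generator except the last, map for the last.
val desugared = (1 to 5).flatMap(i => (i to 5).map(j => List(1, i, j)))

println(comprehension == desugared) // true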
See om-nom-nom's answer for an explanation of what the for loops expand to. I'd like to answer the second part of the question: how to implement a function that can do

for (
  X1 <- 1 to 50
  X2 <- X1 to 50
  X3 <- X2 to 50
  .....
  Xn <- Xn-1 to 50
) yield List(1, X1, X2, X3, ....., Xn)

You can use:

def upto50(dimension: Int) = {
  def loop(n: Int, start: Int): IndexedSeq[List[Int]] = {
    if (n > dimension) IndexedSeq(List())
    else (start to 50).flatMap(x => loop(n + 1, x).map(x :: _))
  }
  loop(1, 1)
}

We compute each of the loops recursively, working inside-out, starting with Xn <- Xn-1 to 50 and building up the solution.

Solutions for the more general case of

for (
  X1 <- S1
  X2 <- S2
  X3 <- S3
  .....
  Xn <- Sn
) yield List(1, X1, X2, X3, ....., Xn)

where S1..Sn are arbitrary sequences or monads are also possible. See this gist for the necessary wall of code.
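A small usage sketch of my own (it repeats the upto50 definition above so the snippet is self-contained, and the expected values assume each inner range starts at the previous generator's value):

def upto50(dimension: Int) = {
  def loop(n: Int, start: Int): IndexedSeq[List[Int]] = {
    if (n > dimension) IndexedSeq(List())
    else (start to 50).flatMap(x => loop(n + 1, x).map(x :: _))
  }
  loop(1, 1)
}

val pairs = upto50(2)
println(pairs.take(3)) // Vector(List(1, 1), List(1, 2), List(1, 3))
println(pairs.size)    // 1275, the number of pairs (x1, x2) with 1 <= x1 <= x2 <= 50

// It agrees with the explicit two-generator comprehension:
val direct = for { x1 <- 1 to 50; x2 <- x1 to 50 } yield List(x1, x2)
println(pairs == direct) // true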
WinBUGS: array index is greater than array upper bound
I need help to find the error in my WinBUGS code (v. 1.4.3). In the "Model Specification" step the model looks syntactically correct. However, in my attempt to load the data, I got this error:

array index is greater than array upper bound for phi3

Could someone please help me? My model is provided below:

model {
  for (w in 1:W) {
    m[w] <- n[w]-y1[w]
    h[w] <- n[w]-y1[w]-y2[w]
    y1[w] ~ dbin(delta[w],n[w])
    y2[w] ~ dbin(theta[w],m[w])
    y3[w] ~ dbin(eta[w],h[w])
    y4[w] <- n[w]-y1[w]-y2[w]-y3[w]
    logit(delta[w]) <- mu1+theta1[a[w]]+phi1[p[w]]+psi1[c[w]]
    logit(theta[w]) <- mu2+theta2[a[w]]+phi2[p[w]]+psi2[c[w]]
    logit(eta[w]) <- mu3+theta3[a[w]]+phi3[p[w]]+psi3[c[w]]
  }
  ## Autoregressive prior model for p effects
  phi1mean[1] <- 0.0
  phi1prec[1] <- tauphi1*1.0E-6
  phi1mean[2] <- 0.0
  phi1prec[2] <- tauphi1*1.0E-6
  phi2mean[1] <- 0.0
  phi2prec[1] <- tauphi2*1.0E-6
  phi2mean[2] <- 0.0
  phi2prec[2] <- tauphi2*1.0E-6
  phi3mean[1] <- 0.0
  phi3prec[1] <- tauphi3*1.0E-6
  phi3mean[2] <- 0.0
  phi3prec[2] <- tauphi3*1.0E-6
  phi4mean[1] <- 0.0
  phi4prec[1] <- tauphi4*1.0E-6
  phi4mean[2] <- 0.0
  phi4prec[2] <- tauphi4*1.0E-6
  for (j in 3:JJ) {
    phi1mean[j] <- 2*phi1[j-1]-phi1[j-2]
    phi1prec[j] <- tauphi1
    phi2mean[j] <- 2*phi2[j-1]-phi2[j-2]
    phi2prec[j] <- tauphi2
    phi3mean[j] <- 2*phi3[j-1]-phi3[j-2]
    phi3prec[j] <- tauphi3
    phi4mean[j] <- 2*phi4[j-1]-phi4[j-2]
    phi4prec[j] <- tauphi4
  }
  # Sampling p effects and subtracting mean for observed p
  for (j in 1:JJ) {
    phi1[j] ~ dnorm(phi1mean[j],phi1prec[j])
    phi2[j] ~ dnorm(phi2mean[j],phi2prec[j])
    phi3[j] ~ dnorm(phi3mean[j],phi3prec[j])
    phi4[j] ~ dnorm(phi4mean[j],phi4prec[j])
    phi1c[j] <- phi1[j]-mean(phi1[1:J])
    phi2c[j] <- phi2[j]-mean(phi2[1:J])
    phi3c[j] <- phi3[j]-mean(phi3[1:J])
    phi4c[j] <- phi4[j]-mean(phi4[1:J])
  }
  # Hyperpriors for the precision parameters
  tauphi1 ~ dgamma(1.0E-1,1.0E-1)
  tauphi2 ~ dgamma(1.0E-1,1.0E-1)
  tauphi3 ~ dgamma(1.0E-1,1.0E-1)
  tauphi4 ~ dgamma(1.0E-1,1.0E-1)
  sigmaphi1 <- 1/sqrt(tauphi1)
  sigmaphi2 <- 1/sqrt(tauphi2)
  sigmaphi3 <- 1/sqrt(tauphi3)
  sigmaphi4 <- 1/sqrt(tauphi4)
  ## Autoregressive prior model for c effects
  psi1mean[1] <- 0.0
  psi1prec[1] <- taupsi1*1.0E-6
  psi1mean[2] <- 0.0
  psi1prec[2] <- taupsi1*1.0E-6
  psi2mean[1] <- 0.0
  psi2prec[1] <- taupsi2*1.0E-6
  psi2mean[2] <- 0.0
  psi2prec[2] <- taupsi2*1.0E-6
  psi3mean[1] <- 0.0
  psi3prec[1] <- taupsi3*1.0E-6
  psi3mean[2] <- 0.0
  psi3prec[2] <- taupsi3*1.0E-6
  psi4mean[1] <- 0.0
  psi4prec[1] <- taupsi4*1.0E-6
  psi4mean[2] <- 0.0
  psi4prec[2] <- taupsi4*1.0E-6
  for (l in 3:LL) {
    psi1mean[l] <- 2*psi1[l-1]-psi1[l-2]
    psi1prec[l] <- taupsi1
    psi2mean[l] <- 2*psi2[l-1]-psi2[l-2]
    psi2prec[l] <- taupsi2
    psi3mean[l] <- 2*psi3[l-1]-psi3[l-2]
    psi3prec[l] <- taupsi3
    psi4mean[l] <- 2*psi4[l-1]-psi4[l-2]
    psi4prec[l] <- taupsi4
  }
  # Sampling c effects and subtracting mean for observed c
  for (l in 1:LL) {
    psi1[l] ~ dnorm(psi1mean[l],psi1prec[l])
    psi2[l] ~ dnorm(psi2mean[l],psi2prec[l])
    psi3[l] ~ dnorm(psi3mean[l],psi3prec[l])
    psi4[l] ~ dnorm(psi4mean[l],psi4prec[l])
    psi1c[l] <- psi1[l]-mean(psi1[1:L])
    psi2c[l] <- psi2[l]-mean(psi2[1:L])
    psi3c[l] <- psi3[l]-mean(psi3[1:L])
    psi4c[l] <- psi4[l]-mean(psi4[1:L])
  }
  # Hyperpriors for the precision parameters
  taupsi1 ~ dgamma(1.0E-1,1.0E-1)
  taupsi2 ~ dgamma(1.0E-1,1.0E-1)
  taupsi3 ~ dgamma(1.0E-1,1.0E-1)
  taupsi4 ~ dgamma(1.0E-1,1.0E-1)
  sigmapsi1 <- 1/sqrt(taupsi1)
  sigmapsi2 <- 1/sqrt(taupsi2)
  sigmapsi3 <- 1/sqrt(taupsi3)
  sigmapsi4 <- 1/sqrt(taupsi4)
  ## Autoregressive prior model for a effects
  theta1mean[1] <- 0.0
  theta1prec[1] <- tautheta1*1.0E-6
  theta1mean[2] <- 0.0
  theta1prec[2] <- tautheta1*1.0E-6
  theta2mean[1] <- 0.0
  theta2prec[1] <- tautheta2*1.0E-6
  theta2mean[2] <- 0.0
  theta2prec[2] <- tautheta2*1.0E-6
  theta3mean[1] <- 0.0
  theta3prec[1] <- tautheta3*1.0E-6
  theta3mean[2] <- 0.0
  theta3prec[2] <- tautheta3*1.0E-6
  theta4mean[1] <- 0.0
  theta4prec[1] <- tautheta4*1.0E-6
  theta4mean[2] <- 0.0
  theta4prec[2] <- tautheta4*1.0E-6
  for (i in 3:I) {
    theta1mean[i] <- 2*theta1[i-1]-theta1[i-2]
    theta1prec[i] <- tautheta1
    theta2mean[i] <- 2*theta2[i-1]-theta2[i-2]
    theta2prec[i] <- tautheta2
    theta3mean[i] <- 2*theta3[i-1]-theta3[i-2]
    theta3prec[i] <- tautheta3
    theta4mean[i] <- 2*theta4[i-1]-theta4[i-2]
    theta4prec[i] <- tautheta4
  }
  # Sampling a effects
  for (i in 1:I) {
    theta1[i] ~ dnorm(theta1mean[i],theta1prec[i])
    theta2[i] ~ dnorm(theta2mean[i],theta2prec[i])
    theta3[i] ~ dnorm(theta3mean[i],theta3prec[i])
    theta4[i] ~ dnorm(theta4mean[i],theta4prec[i])
  }
  # Hyperpriors for the precision parameters
  tautheta1 ~ dgamma(1.0E-1,1.0E-1)
  tautheta2 ~ dgamma(1.0E-1,1.0E-1)
  tautheta3 ~ dgamma(1.0E-1,1.0E-1)
  tautheta4 ~ dgamma(1.0E-1,1.0E-1)
  sigmatheta1 <- 1/sqrt(tautheta1)
  sigmatheta2 <- 1/sqrt(tautheta2)
  sigmatheta3 <- 1/sqrt(tautheta3)
  sigmatheta4 <- 1/sqrt(tautheta4)
  # Removing linear trends from a
  for (i in 1:I) {
    ivec1[i] <- i-(I+1)/2
    aivec1[i] <- ivec1[i]*theta1[i]
    theta1c[i] <- theta1[i]-ivec1[i]*sum(aivec1[])/(I*(I+1)*(I-1)/12)
    ivec2[i] <- i-(I+1)/2
    aivec2[i] <- ivec2[i]*theta2[i]
    theta2c[i] <- theta2[i]-ivec2[i]*sum(aivec2[])/(I*(I+1)*(I-1)/12)
    ivec3[i] <- i-(I+1)/2
    aivec3[i] <- ivec3[i]*theta3[i]
    theta3c[i] <- theta3[i]-ivec3[i]*sum(aivec3[])/(I*(I+1)*(I-1)/12)
    ivec4[i] <- i-(I+1)/2
    aivec4[i] <- ivec4[i]*theta4[i]
    theta4c[i] <- theta4[i]-ivec4[i]*sum(aivec4[])/(I*(I+1)*(I-1)/12)
  }
  ## Computing fitted and projected probabilities
  for (i in 1:I) {
    for (j in 1:JJ) {
      deltapred[i,j] <- exp(mu1+theta1[i]+phi1[j]+psi1[I+j-i])
      thetapred[i,j] <- exp(mu2+theta2[i]+phi2[j]+psi2[I+j-i])
      etapred[i,j] <- exp(mu3+theta3[i]+phi3[j]+psi3[I+j-i])
      p1[i,j] <- deltapred[i,j]
      p2[i,j] <- thetapred[i,j]*(1-deltapred[i,j])
      p3[i,j] <- etapred[i,j]*(1-deltapred[i,j])*(1-thetapred[i,j])
      p4[i,j] <- (1-deltapred[i,j])*(1-thetapred[i,j]-etapred[i,j]+(etapred[i,j]*thetapred[i,j]))
    }
  }
}

### Data
list(
  y1=c(1538727,1444672,1206999,1002960,744597,390301,1640130,1472255,1383947,1109395,984775,697701,1769569,1573498,1489025,1351284,1111397,935166,1747764,1790841,1626852,1407388,1284583,995236,1676555,1787181,1655400,1527122,1421772,1309989,1561922,1643467,1598855,1570645,1495999,1319439,1456258,1561892,1567872,1555237,1551579,1532222,1243436,1387943,1436659,1511134,1549578,1539580),
  y2=c(2634569,3031916,3138776,2875868,2495888,1886174,2148776,2567507,2747428,2696199,2593985,2138303,1662296,2224336,2489723,2698322,2655746,2450716,1304387,1734318,2180203,2396749,2629088,2555934,1087351,1380119,1616309,2109287,2408800,2369855,821642,1041702,1221283,1661647,2098345,2426842,708327,873092,952245,1237084,1628334,2123709,549763,666699,774205,981393,1243888,1538431),
  y3=c(1245931,1664176,1659375,2313647,3850196,4254634,825634,1293382,1454776,1736181,2596719,3655532,554953,901957,1186747,1490664,2083400,2738988,335824,630232,847486,1239538,1702256,2296941,218213,373786,555286,907876,1397221,2005940,143202,237344,344229,594993,1012777,1510283,121187,151070,219731,351040,650930,1157146,87211,120279,140551,226530,393887,733699),
  n=c(5862309,6673625,6534802,6942747,8329067,8152696,5049199,5913474,6268113,6253757,7298375,8260640,4319559,5245545,5840408,6306245,6785242,7492958,3588778,4553684,5259609,5813653,6517271,7001560,3105173,3797508,4271831,5180290,6086716,7002991,2591140,3063506,3428373,4305319,5326889,6217360,2329398,2661972,2886111,3418403,4327922,5565798,1906676,2224544,2444586,2864892,3473404,4362648),
  a=c(1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3,4,4,4,4,4,4,5,5,5,5,5,5,6,6,6,6,6,6,7,7,7,7,7,7,8,8,8,8,8,8),
  p=c(9,10,11,12,13,14,9,10,11,12,13,14,9,10,11,12,13,14,9,10,11,12,13,14,9,10,11,12,13,14,9,10,11,12,13,14,9,10,11,12,13,14,9,10,11,12,13,14),
  c=c(8,9,10,11,12,13,7,8,9,10,11,12,6,7,8,9,10,11,5,6,7,8,9,10,4,5,6,7,8,9,3,4,5,6,7,8,2,3,4,5,6,7,1,2,3,4,5,6),
  W=48, I=8, J=6, JJ=8, L=13, LL=15
)

# Inits
list(
  tauphi1=1, tauphi2=1, tauphi3=1, tauphi4=1,
  taupsi1=1, taupsi2=1, taupsi3=1, taupsi4=1,
  tautheta1=1, tautheta2=1, tautheta3=1, tautheta4=1,
  theta1=c(0,0,0,0,0,0,0,0), theta2=c(0,0,0,0,0,0,0,0), theta3=c(0,0,0,0,0,0,0,0), theta4=c(0,0,0,0,0,0,0,0),
  phi1=c(0,0,0,0,0,0), phi2=c(0,0,0,0,0,0), phi3=c(0,0,0,0,0,0), phi4=c(0,0,0,0,0,0),
  psi1=c(0,0,0,0,0,0,0,0,0,0,0,0,0), psi2=c(0,0,0,0,0,0,0,0,0,0,0,0,0), psi3=c(0,0,0,0,0,0,0,0,0,0,0,0,0), psi4=c(0,0,0,0,0,0,0,0,0,0,0,0,0)
)
In the definition of logit(eta[w]) you have used phi3[p[w]], and p[w] takes values from 9 to 14, but phi3[j] is only defined for j = 1 to JJ = 8. Hence "the array index (9 to 14) is greater than the array upper bound (8)".
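To see the mismatch in miniature, here is a hypothetical sketch of my own (Scala rather than BUGS, and not the poster's data structures): indexing an 8-element effect vector with labels that run up to 14 is exactly an out-of-bounds access, and shifting the labels down is one way to keep every lookup within range.

val phi3 = Array.fill(8)(0.0)            // JJ = 8 effects (0-based indices 0..7 here)
val p    = Vector(9, 10, 11, 12, 13, 14) // period labels as they appear in the data

// Using the labels directly as indices overruns the array:
// p.map(label => phi3(label - 1))       // ArrayIndexOutOfBoundsException once label > 8

// Shifting the labels onto 1..6 keeps every lookup in bounds
// (in BUGS terms this would be something like phi3[p[w] - 8]):
val shifted = p.map(_ - 8)               // Vector(1, 2, 3, 4, 5, 6)
val ok      = shifted.map(i => phi3(i - 1))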
Scala - Most elegant way of initialising values inside array that's already been declared?
I have a 3d array defined like so:

val 3dArray = new Array[Array[Array[Int]]](512, 8, 8)

In JavaScript I would do the following to assign each element to 1:

for (i = 0; i < 512; i++) {
  3dArray[i] = [];
  for (j = 0; j < 8; j++) {
    3dArray[i][j] = [];
    for (k = 0; k < 8; k++) {
      3dArray[i][j][k] = 1;
    }
  }
}

What's the most elegant approach to doing the same?
Not sure there's a particularly elegant way to do it, but here's one way (I use the suffix s to indicate a dimension, i.e. xss is a two-dimensional array):

val xsss = Array.ofDim[Int](512, 8, 8)
for (xss <- xsss; xs <- xss; i <- 0 until 8) xs(i) = 1

Or, using transform, it gets even shorter:

for (xss <- xsss; xs <- xss) xs transform (_ => 1)
for {
  i <- a.indices
  j <- a(i).indices
  k <- a(i)(j).indices
} a(i)(j)(k) = 1

or

for {
  e <- a
  ee <- e
  i <- ee.indices
} ee(i) = 1
See: http://www.scala-lang.org/api/current/index.html#scala.Array$

You have Array.fill to initialize an array of 1 to 5 dimensions to some given value, and Array.tabulate to initialize an array of 1 to 5 dimensions given the current indexes:

scala> Array.fill(2,1,1)(42)
res1: Array[Array[Array[Int]]] = Array(Array(Array(42)), Array(Array(42)))

scala> Array.tabulate(3,2,1){ (x,y,z) => x+y+z }
res2: Array[Array[Array[Int]]] = Array(Array(Array(0), Array(1)), Array(Array(1), Array(2)), Array(Array(2), Array(3)))
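Applied to the 512 × 8 × 8 array from the question (a short sketch of my own, not taken from the answer above), that could look like:

// Array.fill evaluates its element argument once per cell, so every entry
// of the 512 x 8 x 8 array is set to 1.
val xsss: Array[Array[Array[Int]]] = Array.fill(512, 8, 8)(1)

// Array.tabulate would instead compute each cell from its indices:
val fromIndices = Array.tabulate(512, 8, 8)((i, j, k) => i + j + k)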
Nested iteration in Scala
What is the difference (if any) between the two code fragments below? The example is from Ch. 7 of Programming in Scala.

def grep(pattern: String) =
  for (
    file <- filesHere if file.getName.endsWith(".scala");
    line <- fileLines(file) if line.trim.matches(pattern)
  ) println(file + ": " + line.trim)

and this one:

def grep2(pattern: String) =
  for (
    file <- filesHere if file.getName.endsWith(".scala")
  ) for (
    line <- fileLines(file) if line.trim.matches(pattern)
  ) println(file + ": " + line.trim)

Or

for (i <- 1 to 2)
  for (j <- 1 to 2)
    println(i, j)

and

for (
  i <- 1 to 2;
  j <- 1 to 2
) println(i, j)
In this case there is no difference. However, when using yield there is:

for (
  i <- 1 to 2;
  j <- 1 to 2
) yield (i, j)

will give you a sequence containing (1,1), (1,2), (2,1) and (2,2).

for (i <- 1 to 2)
  for (j <- 1 to 2)
    yield (i, j)

will give you nothing, because it generates the sequence (i,1), (i,2) on each iteration and then throws it away.
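A minimal sketch (my own desugaring, not from the answer) of why the second form returns nothing: the outer loop without yield becomes foreach, which returns Unit, so the Vector built by the inner map is simply dropped.

// Single comprehension: flatMap + map, and the result is kept.
val kept = (1 to 2).flatMap(i => (1 to 2).map(j => (i, j)))
// Vector((1,1), (1,2), (2,1), (2,2))

// Nested loops where only the inner one yields: foreach + map,
// so each inner Vector is built and then thrown away.
val dropped: Unit = (1 to 2).foreach(i => (1 to 2).map(j => (i, j)))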
Sometimes it is also useful to output a multi-dimensional collection (for example a matrix or table):

for (i <- 1 to 2) yield
  for (j <- 1 to 2) yield (i, j)

will return:

Vector(Vector((1,1), (1,2)), Vector((2,1), (2,2)))
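If the flat sequence is wanted after all, flattening the nested result gives the same elements as the single two-generator comprehension (a small follow-up sketch of my own):

val nested =
  for (i <- 1 to 2) yield {
    for (j <- 1 to 2) yield (i, j)
  }
// Vector(Vector((1,1), (1,2)), Vector((2,1), (2,2)))

val flat = nested.flatten
// Vector((1,1), (1,2), (2,1), (2,2))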