RuntimeError: Given groups=1, weight of size [32, 1, 5, 5], expected input[256, 3, 256, 256] to have 1 channels, but got 3 channels instead - neural-network

I am trying to run the following code but am getting an error:
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingNet(nn.Module):
    def __init__(self):
        super(EmbeddingNet, self).__init__()
        self.convnet = nn.Sequential(nn.Conv2d(1, 32, 5), nn.PReLU(),
                                     nn.MaxPool2d(2, stride=2),
                                     nn.Conv2d(32, 64, 5), nn.PReLU(),
                                     nn.MaxPool2d(2, stride=2))
        self.fc = nn.Sequential(nn.Linear(64 * 4 * 4, 256),
                                nn.PReLU(),
                                nn.Linear(256, 256),
                                nn.PReLU(),
                                nn.Linear(256, 2)
                                )

    def forward(self, x):
        output = self.convnet(x)
        output = output.view(output.size()[0], -1)
        output = self.fc(output)
        return output

    def get_embedding(self, x):
        return self.forward(x)


class EmbeddingNetL2(EmbeddingNet):
    def __init__(self):
        super(EmbeddingNetL2, self).__init__()

    def forward(self, x):
        output = super(EmbeddingNetL2, self).forward(x)
        output /= output.pow(2).sum(1, keepdim=True).sqrt()
        return output

    def get_embedding(self, x):
        return self.forward(x)

The error is straightforward: the first convolution expects 1-channel input, but you are feeding it 3-channel images.
One change would be in this block:
class EmbeddingNet(nn.Module):
    def __init__(self):
        super(EmbeddingNet, self).__init__()
        self.convnet = nn.Sequential(nn.Conv2d(3, 32, 5),  # instead of 1, I have made it 3
                                     nn.PReLU(),
                                     nn.MaxPool2d(2, stride=2),
                                     nn.Conv2d(32, 64, 5), nn.PReLU(),
                                     nn.MaxPool2d(2, stride=2))
        self.fc = nn.Sequential(nn.Linear(64 * 4 * 4, 256),
                                nn.PReLU(),
                                nn.Linear(256, 256),
                                nn.PReLU(),
                                nn.Linear(256, 2)
                                )
EDIT for the next error: the flattened feature size no longer matches. With 256x256 inputs, the first Conv2d (kernel 5) gives 252x252, MaxPool2d(2) gives 126x126, the second Conv2d (kernel 5) gives 122x122, and the second MaxPool2d(2) gives 61x61, so the first linear layer needs 64 * 61 * 61 input features. Change to this:
        self.fc = nn.Sequential(nn.Linear(64 * 61 * 61, 256),  # here is the change
                                nn.PReLU(),
                                nn.Linear(256, 256),
                                nn.PReLU(),
                                nn.Linear(256, 2)
                                )

Related

Scala flattening loses the desired grouping of the subsets by size

I am counting the number of subsets whose sum of cubes equals the target value.
For a small number of sets this code works (a target of 100 vs. 1000). When the target value increases, the system runs out of resources. I have not flattened allsets, with the intention of only creating and processing the smaller subsets as needed.
How do I lazily create/use the subsets by size until the sums for all the sets of one size equal or exceed the target, at which point nothing more needs to be examined because the remaining sums will be larger than the target?
val target = 100; val exp = 3; val maxi = math.pow(target, 1.0/exp).toInt;
target: Int = 100
exp: Int = 3
maxi: Int = 4
val allterms=(1 to maxi).map(math.pow(_,exp).toInt).to[Set];
allterms: Set[Int] = Set(1, 8, 27, 64)
val allsets = (1 to maxi).map(allterms.subsets(_).to[Vector]); allsets.mkString("\n");
allsets: scala.collection.immutable.IndexedSeq[Vector[scala.collection.immutable.Set[Int]]] = Vector(Vector(Set(1), Set(8), Set(27), Set(64)), Vector(Set(1, 8), Set(1, 27), Set(1, 64), Set(8, 27), Set(8, 64), Set(27, 64)), Vector(Set(1, 8, 27), Set(1, 8, 64), Set(1, 27, 64), Set(8, 27, 64)), Vector(Set(1, 8, 27, 64)))
res7: String =
Vector(Set(1), Set(8), Set(27), Set(64))
Vector(Set(1, 8), Set(1, 27), Set(1, 64), Set(8, 27), Set(8, 64), Set(27, 64))
Vector(Set(1, 8, 27), Set(1, 8, 64), Set(1, 27, 64), Set(8, 27, 64))
Vector(Set(1, 8, 27, 64))
allsets.flatten.map(_.sum).filter(_==target).size;
res8: Int = 1
This implementation loses the separation of the subsets by size.
You can add laziness to your calculations in two ways:
1. Use combinations() instead of subsets(). This creates an Iterator, so a combination (a collection of Int values) won't be realized until it is needed.
2. Use a Stream (or LazyList in Scala 2.13+) so that each "row" (the same-sized combinations) won't be realized until it is needed.
Then you can trim the number of rows that get realized by using the fact that the first combination of each row will have the minimum sum of that row.
val target = 125
val exp = 2
val maxi = math.pow(target, 1.0/exp).toInt //maxi: Int = 11
val allterms=(1 to maxi).map(math.pow(_,exp).toInt)
//allterms = Seq(1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121)
val allsets = Stream.range(1,maxi+1).map(allterms.combinations)
//allsets: Stream[Iterator[IndexedSeq[Int]]] = Stream(<iterator>, ?)
// 11 rows, 2047 combinations, all unrealized
allsets.map(_.map(_.sum).buffered) //Stream[BufferedIterator[Int]]
.takeWhile(_.head <= target) // 6 rows
.flatten // 1479 combinations
.count(_ == target)
//res0: Int = 5
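On Scala 2.13+, where Stream is deprecated in favour of LazyList, the same approach should carry over with only the collection name changed; a minimal sketch, reusing the names from the snippet above:
val allsets = LazyList.range(1, maxi + 1).map(allterms.combinations)
allsets.map(_.map(_.sum).buffered)  // LazyList[BufferedIterator[Int]]
  .takeWhile(_.head <= target)      // stop once a row's smallest sum exceeds the target
  .flatten
  .count(_ == target)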

Merging time distributed tensor gives 'inbound node error'

In my network I have some time-distributed convolutions. Batch size = 1 image, which breaks down into 32 sub-images; for each sub-image there are 3 features of dimension 6x6x256. I need to merge the 3 features corresponding to a particular image.
The tensor definitions look like:
out1 = TimeDistributed(Convolution2D(256, (3, 3), strides=(2, 2), padding='same', activation='relu'))(out1)
out2 = TimeDistributed(Convolution2D(256, (3, 3), strides = (2,2), padding='same', activation='relu'))(out2)
out3 = TimeDistributed(Convolution2D(256, (1, 1), padding='same', activation='relu'))(out3)
out1: <tf.Tensor 'time_distributed_3/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
out2: <tf.Tensor 'time_distributed_5/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
out4: <tf.Tensor 'time_distributed_6/Reshape_1:0' shape=(1, 32, 6, 6, 256) dtype=float32>
I have tried different merging techniques, like TimeDistributed(merge(... )), etc., but nothing works.
out = Lambda(lambda x:merge([x[0],x[1],x[2]],mode='concat'))([out1,out2,out3])
This gives a tensor with the correct dimensions, (1, 32, 6, 6, 768), which then passes further through some flatten and dense layers. But when I build the model like
model = Model( .... , .... ), it gives the error:
File "/home/adityav/.virtualenvs/cv/local/lib/python2.7/site-packages/keras/engine/topology.py", line 1664, in build_map_of_graph
next_node = layer.inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute 'inbound_nodes'
Any idea how to do this time-distributed concatenation when the tensors are 5-dimensional?
Thanks

ScalaCheck: choose an integer with custom probability distribution

I want to create a generator in ScalaCheck that generates numbers between say 1 and 100, but with a bell-like bias towards numbers closer to 1.
Gen.choose() distributes numbers uniformly between the min and max value:
scala> (1 to 10).flatMap(_ => Gen.choose(1,100).sample).toList.sorted
res14: List[Int] = List(7, 21, 30, 46, 52, 64, 66, 68, 86, 86)
And Gen.chooseNum() has an added bias for the upper and lower bounds:
scala> (1 to 10).flatMap(_ => Gen.chooseNum(1,100).sample).toList.sorted
res15: List[Int] = List(1, 1, 1, 61, 85, 86, 91, 92, 100, 100)
I'd like a choose() function that would give me a result that looks something like this:
scala> (1 to 10).flatMap(_ => choose(1,100).sample).toList.sorted
res15: List[Int] = List(1, 1, 1, 2, 5, 11, 18, 35, 49, 100)
I see that choose() and chooseNum() take an implicit Choose trait as an argument. Should I use that?
You could use Gen.frequency():
val frequencies = List(
(50000, Gen.choose(0, 9)),
(38209, Gen.choose(10, 19)),
(27425, Gen.choose(20, 29)),
(18406, Gen.choose(30, 39)),
(11507, Gen.choose(40, 49)),
( 6681, Gen.choose(50, 59)),
( 3593, Gen.choose(60, 69)),
( 1786, Gen.choose(70, 79)),
( 820, Gen.choose(80, 89)),
( 347, Gen.choose(90, 100))
)
(1 to 10).flatMap(_ => Gen.frequency(frequencies:_*).sample).toList
res209: List[Int] = List(27, 21, 31, 1, 21, 18, 9, 29, 69, 29)
I got the frequencies from https://en.wikipedia.org/wiki/Standard_normal_table#Complementary_cumulative. The code only samples the table (every third row, i.e. mod 3), but I think you get the idea.
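If you'd rather not hard-code the table, here is a minimal sketch of the same idea that derives integer weights for Gen.frequency from a half-normal density; biasedChoose and the 10000 scale factor are my own choices, not part of ScalaCheck:
import org.scalacheck.Gen

def biasedChoose(min: Int, max: Int, stdDev: Double): Gen[Int] = {
  // weight each value by a half-normal density centred at `min`
  val weighted = (min to max).map { n =>
    val z = (n - min) / stdDev
    val w = math.max(1, (math.exp(-z * z / 2) * 10000).round.toInt)
    (w, Gen.const(n))
  }
  Gen.frequency(weighted: _*)
}

// e.g. (1 to 10).flatMap(_ => biasedChoose(1, 100, 25.0).sample).toList.sorted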
I can't take much credit for this, and will point you to this excellent page:
http://www.javamex.com/tutorials/random_numbers/gaussian_distribution_2.shtml
A lot of this depends on what you mean by "bell-like". Your example doesn't show any negative numbers, but the number 1 can't be in the middle of the bell without producing any negative numbers unless it is a very, very tiny bell!
Forgive the mutable loop, but I sometimes use one when I have to reject values while building a collection:
object Test_Stack extends App {
  val r = new java.util.Random()
  val maxBellAttempt = 102
  val stdv = maxBellAttempt / 3 // this number * 3 will happen about 99% of the time
  val collectSize = 100000
  var filled = false
  val l = scala.collection.mutable.Buffer[Int]()

  // ref. article above: "What are the minimum and maximum values with nextGaussian()?"
  while (l.size < collectSize) {
    val temp = (r.nextGaussian() * stdv + 1).abs.round.toInt // the +1 is the mean (avg) offset; can be whatever
    // the abs clips the curve in half; you could remove it, but you'd need to move the +1 over more
    if (temp <= maxBellAttempt) l += temp
  }
  val res = l.to[scala.collection.immutable.Seq]
  //println(res.mkString("\n"))
}
Here's the distribution: I pasted the output into Excel and did a COUNTIF to show the frequency of each value (the resulting chart is not reproduced here).
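If you want the same frequency table without leaving Scala, a quick sketch over the res collection built above (groupBy plays the role of COUNTIF):
val freq = res.groupBy(identity).map { case (value, hits) => (value, hits.size) }.toSeq.sortBy(_._1)
freq.foreach { case (value, count) => println(s"$value\t$count") }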

Generate sequence with unknown bound, based on condition

I want to generate a sequence of all Fibonacci numbers that are less than 10000.
For example, the code below generates 40 Fibonacci numbers. But I want to stop generating them based on some condition. How can I do this?
def main(args: Array[String]) {
  val fibonacciSequence = for (i <- 1 to 40) yield fibonacci(i)
  println(fibonacciSequence)
}

def fibonacci(i: Int): Int = i match {
  case 0 => 0
  case 1 => 1
  case _ => fibonacci(i - 1) + fibonacci(i - 2)
}
I want something like this: for(i <- 1 to ?; stop if fibonacci(i) > 100000)
This method, involving a lazily evaluated infinite collection, could produce a suitable result:
import scala.Numeric.Implicits._
def fibonacci[N: Numeric](a: N, b: N): Stream[N] = a #:: fibonacci(b, a + b)
so
fibonacci(0L,1L).takeWhile(_ < 1000L).toList
yields
List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987)
If you don't want an intermediate definition to set up the collection's element type and initial values, you could just declare a val like this:
val fib: Stream[Long] = 0 #:: 1 #:: (fib zip fib.tail map { case (a, b) => a + b })
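With that val, the original goal (all Fibonacci numbers below 10000) should just be:
fib.takeWhile(_ < 10000L).toList
// List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765)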
Using iterators and memoization (computing the current result from the latest ones rather than recomputing what has already been done; method from Rosetta Code, similar to Odomontois's streams),
def fib() = Iterator.iterate((0,1)){ case (a,b) => (b,a+b)}.map(_._1)
To get the first n values, consider for instance,
def nfib(n: Int) = fib().zipWithIndex.takeWhile(_._2 < n).map(_._1).toArray
To get consecutive values up to a given condition or predicate,
def pfib(p: Int => Boolean) = fib().takeWhile(p).toArray
Thus, for example
nfib(10)
Array(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)
pfib( _ < 55)
Array(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)

Ugly number implementation in scala

I am trying to implement ugly number sequence generation in Scala.
Ugly numbers are numbers whose only prime factors are 2, 3 or 5. The sequence 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15...
I have implemented it using var, like a Java implementation, and it is working fine. Here is the ideone link to the complete code: http://ideone.com/qxMEBw
Can someone suggest a better way of implementing it using Scala idioms and without mutable values?
Pasting the code here for reference:
/**
 * Ugly numbers are numbers whose only prime factors are 2, 3 or 5. The sequence
 * 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15,
 * shows the first 11 ugly numbers. By convention, 1 is included.
 * Write a program to find and print the 150th ugly number.
 */
object UglyNumbers extends App {
  var uglyNumbers = List(1)
  val n = 11
  var i2 = 0
  var i3 = 0
  var i5 = 0

  // initialize three choices for the next ugly numbers
  var next_multiple_2 = uglyNumbers(i2) * 2
  var next_multiple_3 = uglyNumbers(i3) * 3
  var next_multiple_5 = uglyNumbers(i5) * 5

  for (i <- 0 to n) {
    val nextUglyNumber = min(next_multiple_2, next_multiple_3, next_multiple_5)
    uglyNumbers = uglyNumbers :+ nextUglyNumber

    if (nextUglyNumber == next_multiple_2) {
      i2 = i2 + 1
      next_multiple_2 = uglyNumbers(i2) * 2
    }

    if (nextUglyNumber == next_multiple_3) {
      i3 = i3 + 1
      next_multiple_3 = uglyNumbers(i3) * 3
    }

    if (nextUglyNumber == next_multiple_5) {
      i5 = i5 + 1
      next_multiple_5 = uglyNumbers(i5) * 5
    }
  }

  for (uglyNumber <- uglyNumbers)
    print(uglyNumber + " ")

  def min(a: Int, b: Int, c: Int): Int = (a, b, c) match {
    case _ if (a <= b && a <= c) => a
    case _ if (b <= a && b <= c) => b
    case _ => c
  }
}
You could take a look at the following code, using a Stream and recursion:
object App extends App {
  val ys = Array(2, 3, 5)

  def uglynumber(n: Int): Boolean =
    n match {
      case x if x == 1 => true
      case x if x % 5 == 0 => uglynumber(x / 5)
      case x if x % 3 == 0 => uglynumber(x / 3)
      case x if x % 2 == 0 => uglynumber(x / 2)
      case _ => false
    }

  def uglynumbers: Stream[Int] = {
    def go(x: Int): Stream[Int] =
      if (uglynumber(x)) x #:: go(x + 1)
      else go(x + 1)
    go(1)
  }

  println(uglynumbers.take(30).toList.sorted)
}
The output for the first 30 ugly numbers:
List(1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, 40, 45, 48, 50, 54, 60, 64, 72, 75, 80)
Revised to use your approach (tracking three indices):
def nums: Stream[Int] = {
  def go(a: Int, b: Int, c: Int): Stream[Int] = {
    val xs = nums.take(a.max(b.max(c))).toArray
    val a2 = 2 * xs(a - 1)
    val b3 = 3 * xs(b - 1)
    val c5 = 5 * xs(c - 1)
    if (a2 <= b3 && a2 <= c5) a2 #:: go(a + 1, b, c)
    else if (b3 <= a2 && b3 <= c5) b3 #:: go(a, b + 1, c)
    else c5 #:: go(a, b, c + 1)
  }
  (1 #:: go(1, 1, 1)).distinct
}

println(nums.take(30).toList)
So, how about this one:
scala> lazy val ugly: Stream[Int] = 1 #:: Stream.from(2).filter{ n =>
| ugly.takeWhile(n/2>=).flatten(x => Seq(2, 3, 5).map(x*)).contains(n)
| }
warning: there were 2 feature warning(s); re-run with -feature for details
ugly: Stream[Int] = <lazy>
scala> ugly.take(30).toList
res5: List[Int] = List(1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18,
20, 24, 25, 27, 30, 32, 36, 40, 45, 48, 50, 54, 60, 64, 72, 75, 80)
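For completeness, another mutation-free formulation worth knowing (not one of the answers above) is the classic merged-streams construction used for Hamming numbers; a minimal sketch:
lazy val uglies: Stream[Int] = 1 #:: {
  // merge two ascending streams, emitting duplicates only once
  def merge(a: Stream[Int], b: Stream[Int]): Stream[Int] =
    if (a.head < b.head) a.head #:: merge(a.tail, b)
    else if (a.head > b.head) b.head #:: merge(a, b.tail)
    else a.head #:: merge(a.tail, b.tail)
  merge(uglies.map(_ * 2), merge(uglies.map(_ * 3), uglies.map(_ * 5)))
}
// uglies.take(30).toList should give the same 30 numbers as above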