How do I implement the following method in Scala.js?
import scala.scalajs.js
def toScalaArray(input: js.typedarray.Uint8Array): Array[Byte] =
// code in question
edited per request: tl;dr
input.view.map(_.toByte).toArray
Original answer
I'm not intimately familiar with Scala.js, but I can elaborate on some of the questions that came up in the comments and improve upon your self-answer.
Also I don't quite get why I need toByte calls
class Uint8Array extends Object with TypedArray[Short, Uint8Array]
Scala treats a Uint8Array as a collection of Shorts, whereas you are expecting it to be a collection of Bytes.
Uint8Array's toArray method notes:
This member is added by an implicit conversion from Uint8Array to
IterableOps[Short] performed by method iterableOps in scala.scalajs.js.LowestPrioAnyImplicits.
So the method is returning an Array[Short] which you then .map to convert the Shorts to Bytes.
In your answer you posted
input.toArray.map(_.toByte)
which is technically correct, but it has the downside of allocating an intermediate array of Shorts. To avoid this allocation, you can perform the .map operation on a .view of the array and then call .toArray on the view.
Views in Scala (and by extension Scala.js) are lightweight objects that reference an original collection plus some kind of transformation/filtering function, which can be iterated like any other collection. You can compose many transformation/filters on a view without having to allocate intermediate collections to represent the results. See the docs page (linked) for more.
input.view.map(_.toByte).toArray
Depending on how you intend to pass the resulting value around, you may not even need to call .toArray. For example if all you need to do is iterate the elements later on, you could just pass the view around as an Iterable[Byte] without ever having to allocate a separate array.
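For example (a minimal sketch; the checksum function and its name are purely illustrative), a consumer can iterate the bytes straight off the view without any array ever being allocated:
import scala.scalajs.js

def byteView(input: js.typedarray.Uint8Array): Iterable[Byte] =
  input.view.map(_.toByte)

// Any consumer that only needs to iterate can take the Iterable[Byte] as-is.
def checksum(bytes: Iterable[Byte]): Int =
  bytes.foldLeft(0)((acc, b) => (acc + (b & 0xff)) & 0xffff)

// val sum = checksum(byteView(someUint8Array))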
All the current answers require iterating over the array in user space.
Scala.js has optimizer-supported conversions for typed arrays (in fact, an Array[Byte] is backed by a typed array in modern configurations). You'll likely get better performance by doing this:
import scala.scalajs.js.typedarray._
def toScalaArray(input: Uint8Array): Array[Byte] = {
// Create a view as Int8 on the same underlying data.
new Int8Array(input.buffer, input.byteOffset, input.length).toArray
}
The additional new Int8Array is necessary to re-interpret the underlying buffer as signed (the Byte type is signed). Only then will Scala.js provide the built-in conversion to Array[Byte].
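To illustrate the signedness point (a small sketch; the demo object and the sample values are just for illustration), unsigned values above 127 come back as negative Bytes:
import scala.scalajs.js.typedarray._

object SignednessDemo {
  def toScalaArray(input: Uint8Array): Array[Byte] =
    new Int8Array(input.buffer, input.byteOffset, input.length).toArray

  def demo(): Unit = {
    val u = new Uint8Array(4)
    u(0) = 0; u(1) = 127; u(2) = 128; u(3) = 255
    // 128 and 255 are re-interpreted as the signed bytes -128 and -1
    println(toScalaArray(u).mkString(", ")) // 0, 127, -128, -1
  }
}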
When looking at the generated code, you'll see that no user-space loop is necessary: the built-in slice method is used to copy the TypedArray. This will almost certainly not be beatable in terms of performance by any user-space loop.
$c_Lhelloworld_HelloWorld$.prototype.toScalaArray__sjs_js_typedarray_Uint8Array__AB = (function(input) {
var array = new Int8Array(input.buffer, $uI(input.byteOffset), $uI(input.length));
return new $ac_B(array.slice())
});
If we compare this with the currently accepted answer (input.view.map(_.toByte).toArray) we see quite a difference (comments mine):
$c_Lhelloworld_HelloWorld$.prototype.toScalaArray__sjs_js_typedarray_Uint8Array__AB = (function(input) {
var this$2 = new $c_sjs_js_IterableOps(input);
var this$5 = new $c_sc_IterableLike$$anon$1(this$2);
// We need a function
var f = new $c_sjsr_AnonFunction1(((x$1$2) => {
var x$1 = $uS(x$1$2);
return ((x$1 << 24) >> 24)
}));
new $c_sc_IterableView$$anon$1();
// Here's the view: So indeed no intermediate allocations.
var this$8 = new $c_sc_IterableViewLike$$anon$6(this$5, f);
var len = $f_sc_TraversableOnce__size__I(this$8);
var result = new $ac_B(len);
// This function actually will traverse.
$f_sc_TraversableOnce__copyToArray__O__I__V(this$8, result, 0);
return result
});
import scala.scalajs.js
def toScalaArray(input: js.typedarray.Uint8Array): Array[Byte] =
input.toArray.map(_.toByte)
I get an error when I put the type and size of an array of classes
I have tried:
fun main(args :Array<String>) {
class modul() {
var nommodul: String? = null
var coeff: Int? = null
var note: Int? = null
}
var releve
class notes() {
var releve: array<modul>(10){""} // here the error
}
}
First of all, your code has several errors. This might be an MCVE and/or copy-paste issue, but I need to address these before I get started on the arrays.
var releve before the notes class isn't allowed. You don't assign it, you don't declare a type, and the compiler will complain if you copy-paste the code from your question.
Second, the array var itself: Array is upper-case, and initialization is separate. This would be more valid (note that this still does not work - the solution for that comes later in this answer):
var releve: Array<modul> = Array(10) {...}
// or
var releve = Array<modul>(10) {...}
And the last thing before I start on the array itself: please read the language conventions, especially the naming ones. Your classes should all start with an upper-case letter.
Kotlin arrays are quite different from Java arrays in many ways, the most notable being that direct array creation also requires an initializer.
The braces are expected to create a new instance of the element type, but yours don't: they create a String, which isn't, in your case, a modul.
There are several ways to fix this depending on how you want to do this.
If you have instances you want to add to the array, you can use arrayOf:
arrayOf(modulInstance, modulInstance2, ...)
If you want to create them directly, you can use your approach:
var releve = Array(10) { modul() }
A note about both of these: because of the initializer, you get automatic type inference and don't need to explicitly declare <modul>.
If you want Java-style arrays, you need an array of nulls.
There are two ways to do this:
var releve = arrayOfNulls<modul>(10)
// or
var releve = Array<modul?>(10) { null }
I highly recommend the first one, because it's cleaner. I'm not sure if there's a difference performance-wise though.
Note that this does infer a nullable type to the array, but it lets you work with arrays in a similar way to Java. Initialization from this point is just like Java: releve[i] = modul(). This approach is mostly useful if you have arguments you want to add to each of the classes and you need to do so manually. Using the manual initializers also provides you with an index (see the documentation) which you can use while initializing.
Note that if you're using a for loop to initialize, you can use Array(10) { YourClass() } as well, and use the supplied index if you need any index-sensitive information, such as function arguments. There's of course nothing wrong with using a for loop, but it can be cleaner.
Further reading
Array
Lambdas
Here are some examples of Kotlin array initialization:
The arrayOf library method
val strings = arrayOf("January", "February", "March")
Primitive Arrays
val numbers: IntArray = intArrayOf(10, 20, 30, 40, 50)
Late Initialization with Indices
val array = arrayOfNulls<Number>(5)
for (i in array.indices) {
array[i] = i * i
}
See Kotlin - Basic Types for details
I am looking for examples of Chapel passing by reference. This example works but it seems like bad form since I am "returning" the input. Does this waste memory? Is there an explicit way to operate on a class?
class PowerPuffGirl {
var secretIngredients: [1..0] string;
}
var bubbles = new PowerPuffGirl();
bubbles.secretIngredients.push_back("sugar");
bubbles.secretIngredients.push_back("spice");
bubbles.secretIngredients.push_back("everything nice");
writeln(bubbles.secretIngredients);
proc kickAss(b: PowerPuffGirl) {
b.secretIngredients.push_back("Chemical X");
return b;
}
bubbles = kickAss(bubbles);
writeln(bubbles.secretIngredients);
And it produces the output
sugar spice everything nice
sugar spice everything nice Chemical X
What is the most efficient way to use a function to modify Bubbles?
Whether Chapel passes an argument by reference or not can be controlled by the argument intent. For example, integers normally pass by value but we can pass one by reference:
proc increment(ref x:int) { // 'ref' here is an argument intent
x += 1;
}
var x:int = 5;
increment(x);
writeln(x); // outputs 6
The way that a type passes when you don't specify an argument intent is known as the default intent. Chapel passes records, domains, and arrays by reference by default; but of these only arrays are modifiable inside the function. (Records and domains pass by const ref - meaning they are passed by reference but the function they are passed to cannot modify them. Arrays pass by ref or const ref depending upon what the function does with them - see array default intent.)
Now, to your question specifically, class instances pass by "value" by default, but Chapel considers the "value" of a class instance to be a pointer. That means that instead of allowing a field (say) to be mutated, passing a class instance by ref just means that it could be replaced with a different class instance. There isn't currently a way to say that a class instance's fields should not be modifiable in the function (other than making them to be explicitly immutable data types).
Given all of that, I don't see any inefficiencies with the code sample you provided in the question. In particular, here:
proc kickAss(b: PowerPuffGirl) {
b.secretIngredients.push_back("Chemical X");
return b;
}
the argument accepting b will receive a copy of the pointer to the instance and the return b will return a copy of that pointer. The contents of the instance (in particular the secretIngredients array) will remain stored where it was and won't be copied in the process.
One more thing:
This example works but it seems like bad form since I am "returning" the input.
As I said, this isn't really a problem for class instances or integers. What about an array?
proc identity(A) {
return A;
}
var A:[1..100] int;
writeln(identity(A));
In this example, the return A in identity() actually does cause a copy of the array to be made. That copy wasn't created when passing the array in to identity(), since the array was passed with a const ref intent. But, since the function returns something "by value" that was a reference, it's necessary to copy it as part of returning. See also arrays return by value by default in the language evolution document.
In any case, if one wants to return an array by reference, it's possible to do so with the ref or const ref return intent, e.g.:
proc refIdentity(ref arg) ref {
return arg;
}
var B:[1..10] int;
writeln(refIdentity(B));
Now there is no copy of the array and everything is just referring to the same B.
Note though that it's currently possible to write programs that return a reference to a variable that no longer exists. The compiler includes some checking in that area but it's not complete. Hopefully improvements in that area are coming soon.
I am encountering a problem whereby I am specifying private constants at the start of a Scala step definition file which rely on a ListBuffer element being populated; however, when the code runs I get an IndexOutOfBoundsException because the list is empty initially and only gets populated later in a for loop.
For Example I have the following 2 constants:
private val ConstantVal1= globalExampleList(2)
private val ConstantVal2= globalExampleList(3)
globalExampleList is populated further down in the file using a for loop:
for (i <- 1 to numberOfW) {
  globalExampleList += x.xy
}
This loop adds as many values as required to a global mutable ListBuffer.
Is there a better way to declare these constants? I've tried declaring them after the for loop, but then the other methods are not able to access them. I have around 4 different methods within the same file which use these values, and instead of accessing them via index each time I thought it would be better to declare them as constants to keep things neat and easy to change.
Thanks
You can create a ListBuffer of the necessary size with a default value and populate it later:
val globalExampleList: ListBuffer[Int] = ListBuffer.fill(numberOfW)(0)
for (i <- 0 until numberOfW) {
globalExampleList(i) = x.xy
}
But ConstantVal1 and ConstantVal2 will still hold the original default value, so you would have to make them vars and re-assign them after you populate the buffer.
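A minimal, self-contained sketch of that variant (the object name and the values are just illustrative):
import scala.collection.mutable.ListBuffer

object VarsVariant {
  val numberOfW = 5
  // pre-sized with a default value, populated later
  val globalExampleList: ListBuffer[Int] = ListBuffer.fill(numberOfW)(0)

  private var constantVal1 = globalExampleList(2) // safe now, but still the default 0

  def populate(): Unit = {
    for (i <- 0 until numberOfW) globalExampleList(i) = (i + 1) * 10
    constantVal1 = globalExampleList(2) // re-assign once the real values are in place
  }
}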
Your code seems to have a lot of mutations and side effects.
You have 2 ways to go.
First, you can use the lazy modifier:
private lazy val ConstantVal1 = globalExampleList(2)
private lazy val ConstantVal2 = globalExampleList(3)
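A minimal sketch of why lazy helps here (the object name and the values are illustrative): the constants are not evaluated until first use, which happens after the loop has filled the buffer.
import scala.collection.mutable.ListBuffer

object LazyVariant {
  private val globalExampleList = ListBuffer.empty[Int]

  // Not evaluated at construction time, only on first access,
  // by which point the buffer has been populated below.
  private lazy val constantVal1 = globalExampleList(2)
  private lazy val constantVal2 = globalExampleList(3)

  def main(args: Array[String]): Unit = {
    for (i <- 1 to 5) globalExampleList += i * 10
    println(constantVal1) // 30
    println(constantVal2) // 40
  }
}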
Or you can write the two lines after the for loop.
val globalExampleList = XXXX
for (i <- 1 to numberOfW) { globalExampleList += x.xy }
private val ConstantVal1 = globalExampleList(2)
private val ConstantVal2 = globalExampleList(3)
One question I have about current Scala CouchDB drivers is whether they can work with "partial schemas". I'll try to explain what I mean: the libraries I've seen seem to all want to do a complete conversion from JSON docs in the database to a Scala object, handle the Scala object, and convert it back to JSON. This is fine if your application knows everything about that type of object --- especially if it is the sole piece of software interacting with that database. However, what if I want to write a little application that only knows about part of the JSON object: for example, what if I'm only interested in a 'mybook' component embedded like this:
{
_id: "0ea56a7ec317138700743cdb740f555a",
_rev: "2-3e15c3acfc3936abf10ea4f84a0aeced",
type: "user",
profiles: {
mybook: {
key: "AGW45HWH",
secret: "g4juh43ui9hg929gk4"
},
.. 6 or 7 other profiles
},
.. lots of other stuff
}
I really don't want to convert the whole JSON AST to a Scala object. On the other hand, in CouchDB you must save back the entire JSON doc, so this needs to be preserved somehow. I think what I really want is something like this:
class MyBook {
private val userJson: JObject = ... // full JSON retrieved from the database
lazy val _id: String = ... // parsed from the JSON
lazy val _rev: String = ... // parsed from the JSON
lazy val key: String = ... // parsed from the JSON
lazy val secret: String = ... // (ditto)
def withSecret(secret: String): MyBook = ... // new object with altered userJson
def save(db: CouchDB) = ... // save userJson back to couchdb
}
Advantages:
computationally cheaper to extract only needed fields
don't have to sync with database evolution except for 'mybook' part
more suitable for development with partial schemas
safer, because there is less chance of inadvertently deleting fields if we haven't kept up with the database schema
Disadvantages:
domain objects in Scala are not pristinely independent of couch/JSON
more memory use per object
Is this possible with any of the current Scala drivers? With either scouchdb or the new Sohva library, it seems not.
As long as you have a good JSON library and a good HTTP client library, implementing a schemaless CouchDB client library is really easy.
Here is an example in Java: code, tests.
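In the same spirit, here is a rough Scala sketch of the wrapper described in the question, written directly against a JSON AST (spray-json is assumed here purely for illustration, and the save method is left out because it depends on the HTTP client):
import spray.json._

// Only the fields we care about are parsed; the full document is kept in userJson.
class MyBook(private val userJson: JsObject) {
  private def str(v: JsValue): String = v match {
    case JsString(s) => s
    case other       => sys.error(s"expected a string, got $other")
  }

  lazy val _id: String  = str(userJson.fields("_id"))
  lazy val _rev: String = str(userJson.fields("_rev"))

  private lazy val mybook: JsObject =
    userJson.fields("profiles").asJsObject.fields("mybook").asJsObject

  lazy val key: String    = str(mybook.fields("key"))
  lazy val secret: String = str(mybook.fields("secret"))

  // Returns a new wrapper whose underlying JSON has only the secret replaced;
  // every other field is carried along untouched.
  def withSecret(newSecret: String): MyBook = {
    val newMybook   = JsObject(mybook.fields + ("secret" -> JsString(newSecret)))
    val newProfiles = JsObject(userJson.fields("profiles").asJsObject.fields + ("mybook" -> newMybook))
    new MyBook(JsObject(userJson.fields + ("profiles" -> newProfiles)))
  }

  // def save(db: ...) = ... // PUT userJson back to CouchDB; client-specific
}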
My CouchDB library uses spray-json for (de)serialization, which is very flexible and lets you ignore parts of a document but still save it. Let's look at a simplified example:
Say we have a document like this
{
dontcare: {
...
},
important: "foo"
}
Then you could declare a class to hold information from this document and define how the conversion is done:
import spray.json._
import DefaultJsonProtocol._

// Holds the parts of the document we don't care about as a raw JSON AST.
case class Dummy(js: JsValue)
case class PartialDoc(dontcare: Dummy, important: String)

// Pass the raw AST through unchanged in both directions.
implicit object DummyFormat extends JsonFormat[Dummy] {
  override def read(js: JsValue): Dummy = Dummy(js)
  override def write(d: Dummy): JsValue = d.js
}
implicit val productFormat = jsonFormat2(PartialDoc)
This will ignore anything in dontcare but still save it as a raw JSON AST. Of course this example is not as complex as the one in your question, but it should give you an idea of how to solve your problem.
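For completeness, a small usage sketch given the definitions above (the document literal is made up; parseJson, convertTo and toJson come from the spray.json import):
val raw = """{ "dontcare": { "x": 1, "y": [2, 3] }, "important": "foo" }"""

val doc = raw.parseJson.convertTo[PartialDoc] // dontcare is kept as a raw JsValue
println(doc.important)                        // prints: foo

// Writing the document back emits the dontcare part untouched.
println(doc.toJson.compactPrint)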
I have two methods like so:
Foo[] GetFoos(Type t) { //do some stuff and return an array of things of type T }
T[] GetFoos<T>()
where T : Foo
{
return GetFoos(typeof(T)) as T[];
}
However, this always seems to return null. Am I doing things wrong or is this just a shortfall of C#?
Nb:
I know I could solve this problem with:
GetFoos(typeof(T)).Cast<T>().ToArray();
However, I would prefer to do this without any allocations (I am working in an environment that is very sensitive to garbage collection).
Nb++:
Bonus points if you suggest an alternative non allocating solution
Edit:
This raises an interesting question. The MSDN docs here: http://msdn.microsoft.com/en-us/library/aa664572%28v=vs.71%29.aspx say that the cast will succeed if there is an implicit or explicit cast. In this case there is an explicit cast, and so the cast should succeed. Are the MSDN docs wrong?
No, C# casting isn't useless - you simply can't cast a Foo[] to a T[] where T is a more derived type, as the Foo[] could contain elements other than T. Why don't you adjust your GetFoos method to GetFoos<T>()? A method only taking a Type object can easily be converted into a generic method, where you could create the array directly via new T[].
If this is not possible: Do you need the abilities an array offers (ie. indexing and things like Count)? If not, you can work with an IEnumerable<T> without having much of a problem. If not: you won't get around going the Cast<T>.ToArray() way.
Edit:
There is no possible cast from Foo[] to T[]; the description in your link is the other way round: you could cast a T[] to a Foo[], as all Ts are Foos, but not all Foos are Ts.
If you can arrange for GetFoos to create the return array using new T[], then you win. If you used new Foo[], then the array's type is fixed at that, regardless of the types of the objects it actually holds.
I haven't tried this, but it should work:
T[] array = Array.ConvertAll<Foo, T>(input,
delegate(Foo obj)
{
return (T)obj;
});
You can find more at http://msdn.microsoft.com/en-us/library/exc45z53(v=VS.85).aspx
Note that Array.ConvertAll allocates a new destination array for the results, so while it avoids the LINQ overhead of Cast<T>().ToArray(), it does not avoid the allocation itself.
From what I understand from your situation, using System.Array in place of a more specific array can help you. Remember, Array is the base class for all strongly typed arrays so an Array reference can essentially store any array. You should make your (generic?) dictionary map Type -> Array so you may store any strongly typed array also while not having to worry about needing to convert one array to another, now it's just type casting.
i.e.,
Dictionary<Type, Array> myDict = ...;
Array GetFoos(Type t)
{
// do checks, blah blah blah
return myDict[t];
}
// and a generic helper
T[] GetFoos<T>() where T: Foo
{
return (T[])GetFoos(typeof(T));
}
// then accesses all need casts to the specific type
Foo[] f = (Foo[])GetFoos(typeof(Foo));
DerivedFoo[] df = (DerivedFoo[])GetFoos(typeof(DerivedFoo));
// or with the generic helper
AnotherDerivedFoo[] adf = GetFoos<AnotherDerivedFoo>();
// etc...
p.s., The MSDN link that you provide shows how arrays are covariant. That is, you may store an array of a more derived type in a reference to an array of a base type. What you're trying to achieve here is contravariance (i.e., using an array of a base type in place of an array of a more derived type) which is the other way around and what arrays can't do without doing a conversion.