I get an error when I specify the type and size of an array of classes
I have tried:
fun main(args: Array<String>) {
    class modul() {
        var nommodul: String? = null
        var coeff: Int? = null
        var note: Int? = null
    }

    var releve

    class notes() {
        var releve: array<modul>(10) { "" } // here is the error
    }
}
First of all, your code has several errors. This might be an MCVE and/or copy-paste issue, but I need to address these before I get started on the arrays.
The var releve before the notes class isn't allowed: you neither assign it nor declare a type, so the compiler will complain if you copy-paste the code from your question.
Second, the array var itself: Array is upper-case, and initialization is separate. This would be more valid (note that this still does not work - the solution for that comes later in this answer):
var releve: Array<modul> = Array(10) {...}
// or
var releve = Array<modul>(10) {...}
And the last thing before I start on the array itself: please read the language conventions, especially the naming ones. Your classes should all start with an upper-case letter.
Kotlin arrays are quite different from Java arrays in many ways, the most notable being that direct array initialization also requires an initializer.
The braces (the initializer lambda) are expected to produce a new instance of the element type, which yours doesn't: it produces a String, which isn't, in your case, a modul.
There are several ways to fix this depending on how you want to do this.
If you have instances you want to add to the array, you can use arrayOf:
arrayOf(modulInstance, modulInstance2, ...)
If you want to create them directly, you can use your approach:
var releve = Array(10) { modul() }
A note about both of these: because of the initialization, you get automatic type inference and don't need to explicitly declare <modul>.
If you want Java-style arrays, you need an array of nulls.
There are two ways to do this:
var releve = arrayOfNulls<modul>(10)
// or
var releve = Array<modul?>(10) { null }
I highly recommend the first one, because it's cleaner. I'm not sure if there's a difference performance-wise though.
Note that this does infer a nullable type for the array, but it lets you work with arrays in a similar way to Java. Initialization from this point is just like Java: releve[i] = modul(). This approach is mostly useful if you have arguments you want to pass to each of the instances and need to do so manually. Using the manual initializers also provides you with an index (see the documentation), which you can use while initializing.
Note that if you're using a for loop to initialize, you can use Array(10) { YourClass() } as well, and use the supplied index if you need any index-sensitive information, such as function arguments. There's of course nothing wrong with using a for loop, but the initializer approach can be cleaner, as in the sketch below.
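As a minimal sketch of that index-based initialization (using a hypothetical Modul class with constructor parameters, purely for illustration):
class Modul(var nommodul: String? = null, var coeff: Int? = null, var note: Int? = null)

fun main() {
    // The initializer lambda receives the index, so each element can be built differently
    val releve = Array(10) { i -> Modul(nommodul = "Module $i", coeff = i + 1) }
    println(releve[3].nommodul) // prints "Module 3"
}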
Further reading
Array
Lambdas
Here are some examples of Kotlin array initialization:
arrayOf library method
val strings = arrayOf("January", "February", "March")
Primitive Arrays
val numbers: IntArray = intArrayOf(10, 20, 30, 40, 50)
Late Initialization with Indices
val array = arrayOfNulls<Number>(5)
for (i in array.indices) {
    array[i] = i * i
}
See Kotlin - Basic Types for details
How do I implement the following method in Scala.js?
import scala.scalajs.js
def toScalaArray(input: js.typedarray.Uint8Array): Array[Byte] =
// code in question
edited per request: tl;dr
input.view.map(_.toByte).toArray
Original answer
I'm not intimately familiar with Scala.js, but I can elaborate on some of the questions that came up in the comments and improve upon your self-answer.
Also I don't quite get why I need toByte calls
class Uint8Array extends Object with TypedArray[Short, Uint8Array]
Scala treats a Uint8Array as a collection of Short, whereas you are expecting it to be a collection of Byte.
Uint8Array's toArray method notes:
This member is added by an implicit conversion from Uint8Array to
IterableOps[Short] performed by method iterableOps in scala.scalajs.js.LowestPrioAnyImplicits.
So the method returns an Array[Short], which you then .map over to convert the Shorts to Bytes.
In your answer you posted
input.toArray.map(_.toByte)
which is technically correct, but it has the downside of allocating an intermediate array of the Shorts. To avoid this allocation, you can perform the .map operation on a .view of the array, then call .toArray on the view.
Views in Scala (and by extension Scala.js) are lightweight objects that reference an original collection plus some kind of transformation/filtering function, which can be iterated like any other collection. You can compose many transformation/filters on a view without having to allocate intermediate collections to represent the results. See the docs page (linked) for more.
input.view.map(_.toByte).toArray
Depending on how you intend to pass the resulting value around, you may not even need to call .toArray. For example if all you need to do is iterate the elements later on, you could just pass the view around as an Iterable[Byte] without ever having to allocate a separate array.
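For example, a rough sketch of that idea (the checksum function here is made up purely for illustration):
import scala.scalajs.js

// Hypothetical consumer that only needs to iterate the bytes once
def checksum(bytes: Iterable[Byte]): Int =
  bytes.foldLeft(0)((acc, b) => acc + b)

def uint8Checksum(input: js.typedarray.Uint8Array): Int =
  checksum(input.view.map(_.toByte)) // no Array[Byte] is ever materialized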
All the current answers require iterating over the array in user space.
Scala.js has optimizer-supported conversions for typed arrays (in fact, Array[Byte] is backed by a typed array in modern configurations). You'll likely get better performance by doing this:
import scala.scalajs.js.typedarray._
def toScalaArray(input: Uint8Array): Array[Byte] = {
  // Create a view as Int8 on the same underlying data.
  new Int8Array(input.buffer, input.byteOffset, input.length).toArray
}
The additional new Int8Array is necessary to reinterpret the underlying buffer as signed (the Byte type is signed). Only then will Scala.js provide the built-in conversion to Array[Byte].
When looking at the generated code, you'll see that no user-space loop is necessary: the built-in slice method is used to copy the TypedArray. This will almost certainly not be beatable in terms of performance by any user-space loop.
$c_Lhelloworld_HelloWorld$.prototype.toScalaArray__sjs_js_typedarray_Uint8Array__AB = (function(input) {
var array = new Int8Array(input.buffer, $uI(input.byteOffset), $uI(input.length));
return new $ac_B(array.slice())
});
If we compare this with the currently accepted answer (input.view.map(_.toByte).toArray) we see quite a difference (comments mine):
$c_Lhelloworld_HelloWorld$.prototype.toScalaArray__sjs_js_typedarray_Uint8Array__AB = (function(input) {
var this$2 = new $c_sjs_js_IterableOps(input);
var this$5 = new $c_sc_IterableLike$$anon$1(this$2);
// We need a function
var f = new $c_sjsr_AnonFunction1(((x$1$2) => {
var x$1 = $uS(x$1$2);
return ((x$1 << 24) >> 24)
}));
new $c_sc_IterableView$$anon$1();
// Here's the view: So indeed no intermediate allocations.
var this$8 = new $c_sc_IterableViewLike$$anon$6(this$5, f);
var len = $f_sc_TraversableOnce__size__I(this$8);
var result = new $ac_B(len);
// This function actually will traverse.
$f_sc_TraversableOnce__copyToArray__O__I__V(this$8, result, 0);
return result
});
import scala.scalajs.js
def toScalaArray(input: js.typedarray.Uint8Array): Array[Byte] =
input.toArray.map(_.toByte)
With arrays you can use a subscript to access array elements directly, and you can read or write to them. With sets, I am not sure of a way to write to their elements.
For example, if I access a set element matching a condition, I'm only able to read it. It is returned by copy, so I can't write to the original.
For example:
columns.first(
    where: {
        $0.header.last == Character(String(i))
    }
)?.cells.append(value: addValue)
// ERROR: Cannot use mutating member on immutable value: function call returns immutable value
You can't just change things inside a set, because of how a (hash) set works: changing an element could change its hash value, leaving the set in an invalid state.
Therefore, you have to take the thing you want to change out of the set, change it, then put it back.
if var thing = columns.first(
    where: {
        $0.header.last == Character(String(i))
    }) {
    columns.remove(thing)
    thing.cells.append(value: addValue)
    columns.insert(thing)
}
If the == operator on Column doesn't care about cells (i.e. adding cells to a column doesn't suddenly make two originally equal columns unequal and vice versa), then you could use update instead:
if var thing = columns.first(
    where: {
        $0.header.last == Character(String(i))
    }) {
    thing.cells.append(value: addValue)
    columns.update(thing)
}
As you can see, it's quite a lot of work, so maybe sets aren't a suitable data structure to use in this situation. Have you considered using an array instead? :)
private var _columns: [Column]
public var columns: [Column] {
    get { _columns }
    set { _columns = Array(Set(newValue)) }
    // or any other way to remove duplicates, as described here: https://stackoverflow.com/questions/25738817/removing-duplicate-elements-from-an-array-in-swift
}
You are getting the error because columns is presumably a set of structs, so columns.first gives you an immutable copy. If you were to use a class, you would get a reference back from columns.first and your code would work as expected.
Otherwise, you will have to do as explained by @Sweeper in his answer.
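To make that concrete, here is a minimal sketch assuming a hypothetical reference-type Column whose hashing and equality ignore cells:
final class Column: Hashable {
    let header: String     // identity, used for hashing and equality
    var cells: [Int] = []  // mutable payload, deliberately excluded from the hash

    init(header: String) { self.header = header }

    static func == (lhs: Column, rhs: Column) -> Bool { lhs.header == rhs.header }
    func hash(into hasher: inout Hasher) { hasher.combine(header) }
}

var columns: Set<Column> = [Column(header: "A"), Column(header: "B")]

// first(where:) returns a reference, so the object inside the set can be mutated in place
columns.first(where: { $0.header == "A" })?.cells.append(42)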
I have a model class in Swift, whose primary purpose is to contain an array of custom objects, but also has other methods/properties etc.
public class Budget: NSObject, NSCoding {
    var lineItems: [LineItem] = []
    // Other methods
    // Other properties
}
As I understand it, it's best practice to not make the property publicly settable, but I want it to be testable, so lineItems needs to be publicly gettable.
Reading the docs, I could do this:
private(set) public var lineItems : [LineItem] = []
But then I have to write a lot of boilerplate code to recreate array methods, such as insert, removeAtIndex etc.
What is best practice here? At the moment, I don't need to do anything else on insert/removal of items, but I guess I may need to do validation or similar in future, but even so it seems redundant to have to write code that just recreates Array methods.
Would it be better just to make lineItems publicly gettable and settable? Are there circumstances where this would or wouldn't make sense?
Thanks!
Swift's Array is a value type (with copy, not reference, semantics), which means that
var a = ["object"]
var b = [String]()
b.append("object")
b == a // true
From this point of view, it does not make sense to allow modifying an array but not allow setting it: modifying is basically creating a new array and assigning it to the variable.
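A small sketch that demonstrates this (LineItem simplified to String here; the didSet observer is only there to show that an append is observed as setting the property):
class Budget {
    private(set) var lineItems: [String] = [] {
        didSet { print("lineItems was set") }
    }

    func add(_ item: String) {
        lineItems.append(item)  // value semantics: appending is observed as setting the property
    }
}

let budget = Budget()
budget.add("Rent")  // prints "lineItems was set"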
I would like to be able to annotate my types and methods with meta-data and read those at runtime.
The language reference explains how to declare attribute usages, but is it actually possible to declare your own attributes?
Reading would require some kind of reflection mechanism, which I was not able to find in the reference at all, so the second part of the question probably is: is reflection possible at all? If these features are not available in Swift, can they be done with Objective-C code (but on Swift instances and types)?
A relatively unrelated note: the decision of what has been modelled as an attribute and what has been added to the core syntax strikes me as pretty arbitrary. It feels like two different teams worked on the syntax and on some of the attributes. E.g. they put weak and unowned into the language as modifiers, but made @final and @lazy attributes. I believe that once they actually add access modifiers, they will probably be attributes like final. Is all of this somehow related to Objective-C interoperability?
If we take the iBook as definitive, there appears to be no developer-facing way of creating arbitrary new attributes in the way you can in Java and .NET. I hope this feature comes in later, but for now, it looks like we're out of luck. If you care about this feature, you should file an enhancement request with Apple (Component: Swift Version: X)
FWIW, there's really not a way to do this in Objective-C either.
You can now do something like this! Check out "Property Wrappers" - https://docs.swift.org/swift-book/LanguageGuide/Properties.html
Here's an example from that page:
@propertyWrapper
struct TwelveOrLess {
    private var number = 0
    var wrappedValue: Int {
        get { return number }
        set { number = min(newValue, 12) }
    }
}
struct SmallRectangle {
    @TwelveOrLess var height: Int
    @TwelveOrLess var width: Int
}
var rectangle = SmallRectangle()
print(rectangle.height)
// Prints "0"
rectangle.height = 10
print(rectangle.height)
// Prints "10"
rectangle.height = 24
print(rectangle.height)
// Prints "12"
I have two methods like so:
Foo[] GetFoos(Type t)
{
    // do some stuff and return an array of things of type T
}

T[] GetFoos<T>()
    where T : Foo
{
    return GetFoos(typeof(T)) as T[];
}
However, this always seems to return null. Am I doing things wrong or is this just a shortfall of C#?
Nb:
I know I could solve this problem with:
GetFoos(typeof(T)).Cast<T>().ToArray();
However, I would prefer to do this without any allocations (I'm working in an environment that is very sensitive to garbage collections).
Nb++:
Bonus points if you suggest an alternative non allocating solution
Edit:
This raises an interesting question. The MSDN docs here: http://msdn.microsoft.com/en-us/library/aa664572%28v=vs.71%29.aspx say that the cast will succeed if there is an implicit or explicit cast. In this case there is an explicit cast, and so the cast should succeed. Are the MSDN docs wrong?
No, C# casting isn't useless - you simply can't cast a Foo[] to a T[] where T is a more derived type, as the Foo[] could contain elements that are not T. Why don't you adjust your GetFoos method to GetFoos<T>()? A method that only takes a Type object can easily be converted into a generic method, where you could create the array directly via new T[].
If this is not possible: do you need the abilities an array offers (i.e. indexing and things like Count)? If not, you can work with an IEnumerable<T> without much of a problem. If you do: you won't get around going the Cast<T>().ToArray() way.
Edit:
There is no possible cast from Foo[] to T[], the description in your link is the other way round - you could cast a T[] to a Foo[] as all T are Foo, but not all Foo are T.
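As a rough sketch of that suggestion (the backing store here is made up for illustration; the key point is that the array is created as T[] from the start, so no cast of the array itself is needed):
using System;
using System.Collections.Generic;

class Foo { }
class DerivedFoo : Foo { }

static class FooRepository
{
    // Hypothetical backing store, keyed by element type
    static readonly Dictionary<Type, List<Foo>> store = new Dictionary<Type, List<Foo>>();

    public static T[] GetFoos<T>() where T : Foo
    {
        List<Foo> items = store.TryGetValue(typeof(T), out var found) ? found : new List<Foo>();

        T[] result = new T[items.Count];      // allocated once, already typed as T[]
        for (int i = 0; i < items.Count; i++)
            result[i] = (T)items[i];          // per-element casts only, no intermediate array
        return result;
    }
}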
If you can arrange for GetFoos to create the return array using new T[], then you win. If you used new Foo[], then the array's type is fixed at that, regardless of the types of the objects it actually holds.
I haven't tried this, but it should work:
T[] array = Array.ConvertAll<Foo, T>(input,
    delegate(Foo obj)
    {
        return (T)obj;
    });
You can find more at http://msdn.microsoft.com/en-us/library/exc45z53(v=VS.85).aspx
Note that Array.ConvertAll does allocate a new output array (it can't convert in place, since the element type changes), but because the length is known up front it allocates exactly once, unlike Cast<T>().ToArray(), which grows an intermediate buffer.
From what I understand of your situation, using System.Array in place of a more specific array type can help you. Remember, Array is the base class for all strongly typed arrays, so an Array reference can essentially store any array. You could make your (generic?) dictionary map Type -> Array so you can store any strongly typed array while not having to worry about converting one array to another; now it's just a type cast.
i.e.,
Dictionary<Type, Array> myDict = ...;

Array GetFoos(Type t)
{
    // do checks, blah blah blah
    return myDict[t];
}

// and a generic helper
T[] GetFoos<T>() where T : Foo
{
    return (T[])GetFoos(typeof(T));
}
// then accesses all need casts to the specific type
Foo[] f = (Foo[])GetFoos(typeof(Foo));
DerivedFoo[] df = (DerivedFoo[])GetFoos(typeof(DerivedFoo));
// or with the generic helper
AnotherDerivedFoo[] adf = GetFoos<AnotherDerivedFoo>();
// etc...
P.S. The MSDN link that you provide shows how arrays are covariant: you may store an array of a more derived type in a reference to an array of a base type. What you're trying to achieve here is contravariance (i.e., using an array of a base type in place of an array of a more derived type), which is the other way around and which arrays can't do without a conversion.
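To see both directions concretely (using the same Foo/DerivedFoo pair as above):
Foo[] asBase = new DerivedFoo[2];             // OK: covariance, a DerivedFoo[] may stand in for a Foo[]

DerivedFoo[] x = new Foo[2] as DerivedFoo[];  // null: the runtime type is Foo[], not DerivedFoo[]

Foo[] b = new DerivedFoo[2];
DerivedFoo[] y = b as DerivedFoo[];           // not null: the runtime type really is DerivedFoo[]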