Kendo Grid Decimal Column Sorting - mvvm

I want to sort a Kendo grid column. I need to sort the WBS column either ascending or descending. That column has values like 1.1, 1.1.1, 1.2.1, 1.2.1.1, 1.2.1.2, 1.3, 2, 2.1, 2.1.1, 2.2, etc.

Basically you need to define an ad-hoc compare function for the WBS column. This is possible using columns.sortable.compare.
Now, the question is inventing a function that can compare two values the way you need.
What comes to mind is the following:
Split the original value on "." into an array of segments. E.g., "10.1.2" becomes [ "10", "1", "2" ].
Compare the values of the two arrays one by one, keeping in mind that one might be shorter than the other, which means it is smaller (i.e. 10.1 is smaller than 10.1.1).
This would be something like:
columns: [
    ...
    {
        field: "wbs",
        width: 150,
        title: "WBS",
        sortable: {
            compare: function (a, b) {
                var wbs1 = a.wbs.split(".");
                var wbs2 = b.wbs.split(".");
                var idx = 0;
                // Compare the shared segments numerically.
                while (wbs1.length > idx && wbs2.length > idx) {
                    if (+wbs1[idx] > +wbs2[idx]) return 1;
                    if (+wbs1[idx] < +wbs2[idx]) return -1;
                    idx++;
                }
                // All shared segments are equal: the shorter value sorts first.
                return wbs1.length - wbs2.length;
            }
        }
    }
    ...
]
NOTE: As I said before, and as a result of the split, what I get in the arrays are strings. So it is pretty important to use a little trick for comparing the values as numbers, which is prepending a +. When I do if (+wbs1[idx] > +wbs2[idx]) I'm actually comparing the values of wbs1[idx] and wbs2[idx] as numbers. Note also that the compare function must return a negative number, zero, or a positive number, just like a standard Array.prototype.sort comparator, not a boolean.
You can see it here: http://jsfiddle.net/OnaBai/H4U7D/2/

Equivalent functionality of Matlab sortrows() in MathNET.Numerics?

Is there a MathNET.Numerics equivalent of Matlab’s sortrows(A, column), where A is a Matrix<double>?
To recall Matlab's documentation:
B = sortrows(A,column) sorts A based on the columns specified in the vector column. For example, sortrows(A,4) sorts the rows of A in ascending order based on the elements in the fourth column. sortrows(A,[4 6]) first sorts the rows of A based on the elements in the fourth column, then based on the elements in the sixth column to break ties.
Similar to my answer to your other question, there's nothing built in, but you could use LINQ's OrderBy() method on an enumerable of the matrix's rows. Given a Matrix<double> x, x.EnumerateRows() returns an IEnumerable<Vector<double>> of the matrix's rows. You can then sort this enumerable by the first element of each row (if that's what you want). For the multi-column form sortrows(A,[4 6]), you would chain LINQ's ThenBy(), e.g. OrderBy(row => row[3]).ThenBy(row => row[5]).
In C#,
var y = Matrix<double>.Build.DenseOfRows(x.EnumerateRows().OrderBy(row => row[0]));
Example
Writing this as an extension method:
public static Matrix<double> SortRows(this Matrix<double> x, int sortByColumn = 0, bool desc = false) {
    if (desc)
        return Matrix<double>.Build.DenseOfRows(x.EnumerateRows().OrderByDescending(row => row[sortByColumn]));
    else
        return Matrix<double>.Build.DenseOfRows(x.EnumerateRows().OrderBy(row => row[sortByColumn]));
}
which you can then call like so:
var y = x.SortRows(0); // Sort by first column
Here's a big fiddle containing everything

In Swift, is there a way to only check part of an array in a for loop (with a set beginning and ending point)?

So let's say we have an array a = [20, 50, 100, 200, 500, 1000].
Generally speaking we could do for number in a { print(number) } if we wanted to check the entirety of a.
How can you limit which indexes are checked? As in, have a set beginning and end index (b and e respectively), and limit the values of number that are checked to those between b and e?
For example, in a, if b is set to 1 and e is set to 4, then only a[1] through a[4] are checked.
I tried doing for number in a[b...e] { print(number) }. I also saw someone here do this:
for j in 0..<n { x[i] = x[j] }, which works if we just want an ending point.
This makes me think I can do something like for number in b..<=e { print(a[number]) }
Is this correct?
I'm practicing data structures in Swift and this is one of the things I've been struggling with. Would really appreciate an explanation!
Using b..<=e is not the correct syntax. You need to use the closed range operator ... instead, i.e.
for number in b...e {
    print(a[number])
}
And since you've already tried
for number in a[b...e] {
    print(number)
}
there is nothing wrong with that syntax either; you can use it either way.
An array has a subscript that accepts a Range: array[range] and returns a sub-array.
A range of integers can be defined as either b...e or b..<e (There are other ways as well), but not b..<=e
A range itself is a sequence (something that supports a for-in loop)
So you can either do
for index in b...e {
    print(a[index])
}
or
for number in a[b...e] {
    print(number)
}
In both cases, it is on you to ensure that b...e are valid indices into the array.
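For instance, a minimal sketch of such a bounds check (the guard itself is my addition, not part of the answers above):
let a = [20, 50, 100, 200, 500, 1000]
let b = 1, e = 4

// Only loop if both endpoints are valid indices and the range is well-formed.
if a.indices.contains(b), a.indices.contains(e), b <= e {
    for number in a[b...e] {
        print(number) // prints 50, 100, 200, 500
    }
}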

Better way to find sums in a grid in Swift

I have an app with a 6x7 grid that lets the user input values. After each value is entered, the app checks whether any consecutive values add up to ten, and if so executes further code (which I have working well for the 4 test cases I've written). So far I've been writing if statements similar to the below:
func findTens() {
    if (rowOneColumnOnePlaceHolderValue + rowOneColumnTwoPlaceHolderValue) == 10 {
        //code to execute
    } else if (rowOneColumnOnePlaceHolderValue + rowOneColumnTwoPlaceHolderValue + rowOneColumnThreePlaceHolderValue) == 10 {
        //code to execute
    } else if (rowOneColumnOnePlaceHolderValue + rowOneColumnTwoPlaceHolderValue + rowOneColumnThreePlaceHolderValue + rowOneColumnFourPlaceHolderValue) == 10 {
        //code to execute
    } else if (rowOneColumnOnePlaceHolderValue + rowOneColumnTwoPlaceHolderValue + rowOneColumnThreePlaceHolderValue + rowOneColumnFourPlaceHolderValue + rowOneColumnFivePlaceHolderValue) == 10 {
        //code to execute
    }
}
That's not quite halfway through row one, and it will end up being a very large set of if statements (231 if I'm calculating correctly, since a single 7-column row would be 1,2 - 1,2,3 - ... - 2,3 - 2,3,4 - ... - 6,7, i.e. 6+5+4+3+2+1 = 21 possibilities per row). I think there must be a more concise way of doing it but I've struggled to find something better.
I've thought about using an array of each of the rowXColumnYPlaceHolderValue variables similar to the below:
let rowOnePlaceHolderArray = [rowOneColumnOnePlaceHolderValue, rowOneColumnTwoPlaceHolderValue, rowOneColumnThreePlaceHolderValue, rowOneColumnFourPlaceHolderValue, rowOneColumnFivePlaceHolderValue, rowOneColumnSixPlaceHolderValue, rowOneColumnSevenPlaceHolderValue]
for row in rowOnePlaceHolderArray {
    //compare each element of the array here, 126 comparisons
}
But I'm struggling to find a next step for that approach, in addition to the fact that those array elements apparently become copies and are no longer references to the original variables...
I've been lucky enough to find some fairly clever solutions to some of the other issues I've come across for the app, but this one has given me trouble for about a week now so I wanted to ask for help to see what ideas I might be missing. It's possible that there will not be another approach that is significantly better than the 231 if statement approach, which will be ok. Thank you in advance!
Here's an idea (off the top of my head; I have not bothered to optimize). I'll assume that your goal is:
Given an array of Int, find the first consecutive elements that sum to a given Int total.
Your use of "10" as a target total is just a special case of that.
So I'll look for consecutive elements that sum to a given total, and if I find them, I'll return their range within the original array. If I don't find any, I'll return nil.
Here we go:
extension Array where Element == Int {
    func rangeOfSum(_ sum: Int) -> Range<Int>? {
        // Need at least two elements to form a consecutive run.
        guard count >= 2 else { return nil }
        newstart:
        for start in 0..<count-1 {
            let slice = dropFirst(start)
            for n in 2...slice.count {
                let total = slice.prefix(n).reduce(0, +)
                if total == sum {
                    return start..<(start+n)
                }
                if total > sum {
                    // Overshot (assuming non-negative elements), so move the start forward.
                    continue newstart
                }
                if n == slice.count && total < sum {
                    // Even the whole remaining tail is too small; no later start can do better.
                    return nil
                }
            }
        }
        return nil
    }
}
Examples:
[1, 8, 6, 2, 8, 4].rangeOfSum(10) // 3..<5, i.e. 2,8
[1, 8, 1, 2, 8, 4].rangeOfSum(10) // 0..<3, i.e. 1,8,1
[1, 8, 3, 2, 9, 4].rangeOfSum(10) // nil
Okay, so now that we've got that, extracting each possible row or column from the grid (or whatever the purpose of the game is) is left as an exercise for the reader. 🙂
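For instance, here is a minimal sketch of that exercise, assuming the grid is stored as an array of rows ([[Int]]); the grid layout, sample values, and variable names are my assumptions, not from the question:
// Hypothetical layout: the 6x7 grid as an array of rows (shortened to 3 rows here).
let grid: [[Int]] = [
    [1, 8, 6, 2, 8, 4, 3],
    [5, 5, 1, 1, 1, 1, 6],
    [9, 2, 7, 4, 6, 3, 8]
]

// Check every row for a consecutive run summing to ten.
for (r, row) in grid.enumerated() {
    if let range = row.rangeOfSum(10) {
        print("row \(r): \(Array(row[range])) sums to 10")
    }
}

// Columns work the same way after transposing.
let columns = (0..<grid[0].count).map { c in grid.map { $0[c] } }
for (c, column) in columns.enumerated() {
    if let range = column.rangeOfSum(10) {
        print("column \(c): \(Array(column[range])) sums to 10")
    }
}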

How to draw box plot of columns of a table using dc.js

I have a table in which the number of experiment columns is arbitrary, but each column name is the prefix "client_" followed by the client number.
I want to draw a box plot of the values against "client_#" using dc.js. The table is a CSV file which is loaded using d3.csv().
There are examples using ordinary groups; however, I need each column to be displayed as its own box plot, and none of the examples do this. How can I create a box plot from each column?
This is very similar to this question:
dc.js - how to create a row chart from multiple columns
Many of the same caveats apply - it will not be possible to filter (brush) using this chart, since every row contributes to every box plot.
The difference is that we will need all the individual values, not just the sum total.
I didn't have an example to test with, but hopefully this code will work:
function column_values(dim, cols) {
    var _groupAll = dim.groupAll().reduce(
        function(p, v) { // add
            cols.forEach(function(c) {
                p[c].splice(d3.bisectLeft(p[c], v[c]), 0, v[c]);
            });
            return p;
        },
        function(p, v) { // remove
            cols.forEach(function(c) {
                p[c].splice(d3.bisectLeft(p[c], v[c]), 1);
            });
            return p;
        },
        function() { // init
            var p = {};
            cols.forEach(function(c) {
                p[c] = [];
            });
            return p;
        });
    return {
        all: function() {
            // or _.pairs, anything to turn the object into an array
            return d3.map(_groupAll.value()).entries();
        }
    };
}
As with the row chart question, we'll need to group all the data in one bin using groupAll - ordinary crossfilter bins won't work since every row contributes to every bin.
The init function creates an object which will be keyed by column name. Each entry is an array of the values in that column.
The add function goes through all the columns and inserts each column's value into each array in sorted order.
The remove function finds the value using binary search and removes it.
When .all() is called, the {key,value} pairs will be built from the object.
The column_values function takes either a dimension or a crossfilter object for the first parameter, and an array of column names for the second parameter. It returns a fake group with a bin for each client, where the key is the client name and the value is all of the values for that client in sorted order.
You can use column_values like this:
var boxplotColumnsGroup = column_values(cf, ['client_1', 'client_2', 'client_3', 'client_4']);
boxPlot
    .dimension({}) // no valid dimension as explained in earlier question
    .group(boxplotColumnsGroup);
If this does not work, please attach an example so we can debug this together.

Creating a 3D array in Swift

I want to create a 3D array. The array will be 5*5*infinite (the innermost array will probably have 3-5 objects of type String).
I tried something like this:
var array3D = [[[String]]]()
and tried to add new strings like this:
array3D[ii][yy] += y.components(separatedBy: ";")
But I had problems adding new arrays or strings to it, and got an EXC_BAD_INSTRUCTION error.
In school I have 5 lessons per day, so in a week there are 25 lessons, and I want to make an iPhone app that represents my timetable.
A multidimensional array is just an array of arrays. When you say
var array3D = [[[String]]]()
What you've created is one empty array, which expects the values you add to it to be of type [[String]]. But those inner arrays don't exist yet (much less the inner inner arrays inside them). You can't assign to array3D[ii] because ii is beyond the bounds of your empty array, you can't assign to array3D[ii][yy] because array3D[ii] doesn't exist, and you can't append to an array in array3D[ii][yy] because array3D[ii][yy] doesn't exist.
Because your outer and middle levels of array are fixed in size (5 by 5 by n), you could easily create them using Array.init(repeating:count:):
var array3D = [[[String]]](
    repeating: [[String]](repeating: [], count: 5),
    count: 5)
(Here the innermost arrays are empty, since it appears you plan to always fill them by appending. Note that it must be declared var, not let, for array3D[ii][yy] += ... to compile.)
However, it also looks like your use case expects a rather sparse data structure. Why create all those arrays if most of them will be empty? You might be better served by a data structure that lets you decouple the way you index elements from the way they're stored.
For example, you could use a Dictionary where each key is some value that expresses a (day, lesson, index) combination. That way the (day, lesson, index) combinations that don't have anything don't need to be accounted for. You can even wrap a nice abstraction around your encoding of combos into keys by defining your own type and giving it a subscript operator (as alluded to in appzYourLife's answer).
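For instance, a minimal sketch of that idea (the Timetable type and its string key encoding are hypothetical, just to illustrate the approach):
import Foundation

// Hypothetical sparse timetable: only slots that hold something occupy memory.
struct Timetable {
    private var storage: [String: [String]] = [:]
    subscript(day: Int, lesson: Int) -> [String] {
        get { return storage["\(day)-\(lesson)"] ?? [] }
        set { storage["\(day)-\(lesson)"] = newValue }
    }
}

var timetable = Timetable()
timetable[0, 0] = "Math;Room 12;Smith".components(separatedBy: ";")
timetable[0, 0] // ["Math", "Room 12", "Smith"]
timetable[4, 4] // [] - an empty slot costs nothing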
I slightly modified the Matrix struct provided by The Swift Programming Language:
struct Matrix<ElementType> {
    let rows: Int, columns: Int
    var grid: [[ElementType]]
    init(rows: Int, columns: Int) {
        self.rows = rows
        self.columns = columns
        grid = Array(repeating: [], count: rows * columns)
    }
    func indexIsValidForRow(row: Int, column: Int) -> Bool {
        return row >= 0 && row < rows && column >= 0 && column < columns
    }
    subscript(row: Int, column: Int) -> [ElementType] {
        get {
            assert(indexIsValidForRow(row: row, column: column), "Index out of range")
            return grid[(row * columns) + column]
        }
        set {
            assert(indexIsValidForRow(row: row, column: column), "Index out of range")
            grid[(row * columns) + column] = newValue
        }
    }
}
Usage
Now you can write
var data = Matrix<String>(rows: 5, columns: 5)
data[0, 0] = ["Hello", "World"]
data[0, 0][0] // Hello