I'm trying to figure out how to alter some preset files and am wondering how to decode my values to match this encoding ... it's used in a preset file of an Apple music sequencer.
Each value appears as an integer alongside its two-byte hex representation:
val b1 b2 | b1,b2(i) b1b2(i) b2,b1(i) b2b1(i)
0 00 00 | 0,0 0 0,0 0
1 80 3f | 128,63 32831 63,128 16256
2 00 40 | 0,64 64 64,0 16384
3 40 40 | 64,64 16448 64,64 16448
4 80 40 | 128,64 32832 64,128 16512
5 a0 40 | 160,64 41024 64,160 16544
6 c0 40 | 192,64 49216 64,192 16576
7 e0 40 | 224,64 57408 64,224 16608
8 00 41 | 0,65 65 65,0 16640
9 10 41 | 16,65 4161 65,16 16656
10 20 41 | 32,65 8257 65,32 16672
11 30 41 | 48,65 12353 65,48 16688
20 a0 41 | 160,65 41025 65,160 16800
21 a8 41 | 168,65 43073 65,168 16808
40 20 42 | 32,66 8258 66,32 16928
50 48 42 | 72,66 18498 66,72 16968
100 c8 42 | 200,66 51266 66,200 17096
200 48 43 | 72,67 18499 67,72 17224
I can't work out the logic behind this, help much appreciated - tyia!
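For what it's worth, the pattern looks to me like each integer is stored as a 32-bit little-endian IEEE-754 float, with only the two most significant bytes shown here (the low two bytes would be zero). That is purely a guess from the table, not something taken from the format's documentation; a quick Python sketch to test it against the sample values:
import struct

# Hypothesis: the value is a little-endian float32 and b1 b2 are its top two bytes.
samples = {0: "00 00", 1: "80 3f", 2: "00 40", 9: "10 41",
           21: "a8 41", 50: "48 42", 100: "c8 42", 200: "48 43"}
for value, pair in samples.items():
    b1, b2 = (int(h, 16) for h in pair.split())
    decoded = struct.unpack("<f", bytes([0, 0, b1, b2]))[0]
    print(value, "->", decoded)  # each pair should decode back to the original value

# Going the other way, the two bytes for a value n would be: struct.pack("<f", float(n))[2:]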
We're trying to verify valid input for the parameter PaperSize of Set-PrintConfiguration.
We're trying to create an array with all possible accepted values for the argument:
$testCommand = Get-Command Set-PrintConfiguration
$testCommand.Parameters.PaperSize
$testPaperSize = [Microsoft.PowerShell.Cmdletization.GeneratedTypes.PrinterConfiguration.PaperSizeEnum]
$testPaperSize.DeclaredFields.Name
This does return a list of options, but it also includes a member named value__ that IntelliSense does not seem to suggest. This makes me think the query for valid values is incorrect.
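For what it's worth, value__ is the compiler-generated instance field that backs every enum type, which is why DeclaredFields includes it; it is not one of the named constants. Asking the enum type for its names or values avoids it entirely. A minimal sketch, assuming the PaperSizeEnum type resolves the same way on your machine:
# value__ is the enum's hidden backing field; GetNames/GetValues only return the named constants
$paperSizeType = [Microsoft.PowerShell.Cmdletization.GeneratedTypes.PrinterConfiguration.PaperSizeEnum]
[enum]::GetNames($paperSizeType)   # just the names
[enum]::GetValues($paperSizeType)  # the typed enum values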
To get the possible values from the PaperKind enum, you can do something like this:
function Get-Enum {
    param (
        [type]$Type
    )
    if ($Type.BaseType.FullName -ne 'System.Enum') {
        Write-Error "Type '$Type' is not an enum"
    }
    else {
        [enum]::GetNames($Type) | ForEach-Object {
            $exp = "([$Type]::$($_)).value__"
            [PSCustomObject] @{
                'Name'  = $_
                'Value' = Invoke-Expression -Command $exp
            }
        }
    }
}
Get-Enum System.Drawing.Printing.PaperKind
On my machine it returns:
Name Value
---- -----
Custom 0
Letter 1
LetterSmall 2
Tabloid 3
Ledger 4
Legal 5
Statement 6
Executive 7
A3 8
A4 9
A4Small 10
A5 11
B4 12
B5 13
Folio 14
Quarto 15
Standard10x14 16
Standard11x17 17
Note 18
Number9Envelope 19
Number10Envelope 20
Number11Envelope 21
Number12Envelope 22
Number14Envelope 23
CSheet 24
DSheet 25
ESheet 26
DLEnvelope 27
C5Envelope 28
C3Envelope 29
C4Envelope 30
C6Envelope 31
C65Envelope 32
B4Envelope 33
B5Envelope 34
B6Envelope 35
ItalyEnvelope 36
MonarchEnvelope 37
PersonalEnvelope 38
USStandardFanfold 39
GermanStandardFanfold 40
GermanLegalFanfold 41
IsoB4 42
JapanesePostcard 43
Standard9x11 44
Standard10x11 45
Standard15x11 46
InviteEnvelope 47
LetterExtra 50
LegalExtra 51
TabloidExtra 52
A4Extra 53
LetterTransverse 54
A4Transverse 55
LetterExtraTransverse 56
APlus 57
BPlus 58
LetterPlus 59
A4Plus 60
A5Transverse 61
B5Transverse 62
A3Extra 63
A5Extra 64
B5Extra 65
A2 66
A3Transverse 67
A3ExtraTransverse 68
JapaneseDoublePostcard 69
A6 70
JapaneseEnvelopeKakuNumber2 71
JapaneseEnvelopeKakuNumber3 72
JapaneseEnvelopeChouNumber3 73
JapaneseEnvelopeChouNumber4 74
LetterRotated 75
A3Rotated 76
A4Rotated 77
A5Rotated 78
B4JisRotated 79
B5JisRotated 80
JapanesePostcardRotated 81
JapaneseDoublePostcardRotated 82
A6Rotated 83
JapaneseEnvelopeKakuNumber2Rotated 84
JapaneseEnvelopeKakuNumber3Rotated 85
JapaneseEnvelopeChouNumber3Rotated 86
JapaneseEnvelopeChouNumber4Rotated 87
B6Jis 88
B6JisRotated 89
Standard12x11 90
JapaneseEnvelopeYouNumber4 91
JapaneseEnvelopeYouNumber4Rotated 92
Prc16K 93
Prc32K 94
Prc32KBig 95
PrcEnvelopeNumber1 96
PrcEnvelopeNumber2 97
PrcEnvelopeNumber3 98
PrcEnvelopeNumber4 99
PrcEnvelopeNumber5 100
PrcEnvelopeNumber6 101
PrcEnvelopeNumber7 102
PrcEnvelopeNumber8 103
PrcEnvelopeNumber9 104
PrcEnvelopeNumber10 105
Prc16KRotated 106
Prc32KRotated 107
Prc32KBigRotated 108
PrcEnvelopeNumber1Rotated 109
PrcEnvelopeNumber2Rotated 110
PrcEnvelopeNumber3Rotated 111
PrcEnvelopeNumber4Rotated 112
PrcEnvelopeNumber5Rotated 113
PrcEnvelopeNumber6Rotated 114
PrcEnvelopeNumber7Rotated 115
PrcEnvelopeNumber8Rotated 116
PrcEnvelopeNumber9Rotated 117
PrcEnvelopeNumber10Rotated 118
Hope that helps
I'm trying to eliminate duplicate entries for customers in my contact list. Assume my table has three columns (FirstName, LastName, CustomerID).
Can somebody help me create a query that identifies different CustomerIDs with either the same or very similar First and Last Names? We end up with multiple entries because salespeople search for a name, fail to find it due to a misspelling, and then create a new entry for the customer with a slightly different spelling of the name.
Thanks!
One approach is to manage a mapping of names to common (mis)spellings and then map all the various spellings back to the intended name. Then group them.
t:([] fn:100?(`John;`Mike;`Bob;`john;`Johnn;`Mick;`Bobby);ln:100?(`Doe;`Smith;`doe;`Do;`smith);id:til 100)
mapFN:exec similar!name from ungroup flip `name`similar!flip (
(`Bob; (`Bob;`bob;`Bobby;`bobby));
(`John; (`John;`Johnn;`john));
(`Mike; (`Mike;`mike;`Mick;`Michael))
);
mapLN:exec similar!name from ungroup flip `name`similar!flip (
(`Doe; (`Doe;`doe;`Do));
(`Smith; (`Smith;`smith;`Smyth))
);
Without mapping:
q)`fn`ln xgroup t
fn ln | id
-----------| ----------------
Mick Do | 0 25 26 50 68 71
Bobby Smith| 1 22 23 83
John Smith| 2 8 48 51 69 85
Mike Doe | 3 44
john doe | ,4
Mick Doe | 5 47 95
John Doe | 6 46 49 63
john Smith| 7 66 74
Johnn doe | 9 13 79 94
Mick doe | 10 20 55 67
Bobby smith| 11 17 18 53
john Doe | 12 21 56
...
With mapping:
q)`fn`ln xgroup update mapFN[fn],mapLN[ln] from t
fn ln | id
----------| -----------------------------------------------------------------
Mike Doe | 0 3 5 10 20 25 26 39 44 47 50 52 55 67 68 70 71 78 95 97
Bob Smith| 1 11 17 18 22 23 30 38 45 53 77 82 83
John Smith| 2 7 8 16 19 33 37 40 43 48 51 64 66 69 73 74 80 85 87
John Doe | 4 6 9 12 13 21 31 32 41 42 46 49 56 57 62 63 65 72 79 81 86 89 91
Bob Doe | 14 24 27 28 35 54 58 59 61 75 76 84
Mike Smith| 15 29 34 36 60 88 90 93 96 98
You could also do something more sophisticated with regex pattern matching. The mapping would need to be pretty precise though, as otherwise you might end up with false groupings.
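For instance, a rough q sketch of that idea, reusing the t defined above (it only folds away case differences and checks one hand-written pattern, so treat it as illustrative rather than a real fuzzy match):
/ collapse pure case differences, then group as before
`fn`ln xgroup update fn:lower fn, ln:lower ln from t
/ a simple wildcard match to eyeball near-misses for one name
select from t where fn like "[Jj]ohn*"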
Today I had a few hundred items (IDs from a SQL query) and needed to paste them into another query in a form readable by an analyst. I needed the *nix fold command: I wanted to take the 300 lines and reformat them as multiple numbers per line, separated by spaces. I would have used fold -w 100 -s.
Similar tools on *nix include fmt and par.
On Windows, is there an easy way to do this in PowerShell? I expected one of the *-Format cmdlets to do it, but I couldn't find one. I'm using PowerShell v4.
See https://unix.stackexchange.com/questions/25173/how-can-i-wrap-text-at-a-certain-column-size
# Input Data
# simulate a set of 300 numeric IDs from 100,000 to 150,000
100001..100330 |
Out-File _sql.txt -Encoding ascii
# I want output like:
# 100001, 100002, 100003, 100004, 100005, ... 100010, 100011
# 100012, 100013, 100014, 100015, 100016, ... 100021, 100022
# each line less than 100 characters.
Depending on how big the file is, you could read it all into memory, join it with spaces, and then split at around 100 characters or the next space.
(Get-Content C:\Temp\test.txt) -join " " -split '(.{100,}?[ |$])' | Where-Object{$_}
That regex looks for 100 characters and then the first space after that. The string is then -split on that match, but since the pattern is wrapped in parentheses, the match itself is returned instead of discarded. The Where-Object removes the empty entries that are created between the matches.
A small sample to prove the theory:
@"
134
124
1
225
234
4
34
2
42
342
5
5
2
6
"@.split("`n") -join " " -split '(.{10,}?[ |$])' | Where-Object{$_}
The above splits at 10 characters where possible. Where it cannot, the numbers are still preserved. The sample is based on me banging on the keyboard with my head.
134 124 1
225 234 4
34 2 42
342 5 5
2 6
You could then make this into a function to get the simplicity back that you are most likely looking for. It can get better but this isn't really the focus of the answer.
Function Get-Folded{
    Param(
        [string[]]$Strings,
        [int]$Wrap = 50
    )
    $strings -join " " -split "(.{$wrap,}?[ |$])" | Where-Object{$_}
}
Again with the samples
PS C:\Users\mcameron> Get-Folded -Strings (Get-Content C:\temp\test.txt) -wrap 40
"Lorem ipsum dolor sit amet, consectetur
adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. Ut enim
ad minim veniam, quis nostrud exercitation
... output truncated...
You can see that it was supposed to split on 40 characters but the second line is longer. It split on the next space after 40 to preserve the word.
If it's one item per line and you want to join every 100 items onto a single line separated by a space, you could put all the output into a text file and then do this:
gc c:\count.txt -readcount 100 | % {$_ -join " "}
When I saw this, the first thing that came to my mind was abusing Format-Table to do this, mostly because it knows how to break the lines properly when you specify a width. After coming up with a function, it seems that the other solutions presented are shorter and probably easier to understand, but I figured I'd still go ahead and post this solution anyway:
function fold {
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline)]
        $InputObject,
        [Alias('w')]
        [int] $LineWidth = 100,
        [int] $ElementWidth
    )

    begin {
        $SB = New-Object System.Text.StringBuilder
        if ($ElementWidth) {
            $SBFormatter = "{0,$ElementWidth} "
        }
        else {
            $SBFormatter = "{0} "
        }
    }

    process {
        foreach ($CurrentObject in $InputObject) {
            [void] $SB.AppendFormat($SBFormatter, $CurrentObject)
        }
    }

    end {
        # Format-Table wanted some sort of an object assigned to it, so I
        # picked the first static object that popped into my head:
        ([guid]::Empty | Format-Table -Property @{N="DoesntMatter"; E={$SB.ToString()}; Width = $LineWidth } -Wrap -HideTableHeaders |
            Out-String).Trim("`r`n")
    }
}
Using it gives output like this:
PS C:\> 0..99 | Get-Random -Count 100 | fold
1 73 81 47 54 41 17 87 2 55 30 91 19 50 64 70 51 29 49 46 39 20 85 69 74 43 68 82 76 22 12 35 59 92
13 3 88 6 72 67 96 31 11 26 80 58 16 60 89 62 27 36 37 18 97 90 40 65 42 15 33 24 23 99 0 32 83 14
21 8 94 48 10 4 84 78 52 28 63 7 34 86 75 71 53 5 45 66 44 57 77 56 38 79 25 93 9 61 98 95
PS C:\> 0..99 | Get-Random -Count 100 | fold -ElementWidth 2
74 89 10 42 46 99 21 80 81 82 4 60 33 45 25 57 49 9 86 84 83 44 3 77 34 40 75 50 2 18 6 66 13
64 78 51 27 71 97 48 58 0 65 36 47 19 31 79 55 56 59 15 53 69 85 26 20 73 52 68 35 93 17 5 54 95
23 92 90 96 24 22 37 91 87 7 38 39 11 41 14 62 12 32 94 29 67 98 76 70 28 30 16 1 61 88 43 8 63
72
PS C:\> 0..99 | Get-Random -Count 100 | fold -ElementWidth 2 -w 40
21 78 64 18 42 15 40 99 29 61 4 95 66
86 0 69 55 30 67 73 5 44 74 20 68 16
82 58 3 46 24 54 75 14 11 71 17 22 94
45 53 28 63 8 90 80 51 52 84 93 6 76
79 70 31 96 60 27 26 7 19 97 1 59 2
65 43 81 9 48 56 25 62 13 85 47 98 33
34 12 50 49 38 57 39 37 35 77 89 88 83
72 92 10 32 23 91 87 36 41
This is what I ended up using.
# simulate a set of 300 SQL IDs from 100,000 to 150,000
100001..100330 |
%{ "$_, " } | # I'll need this decoration in the SQL script
Out-File _sql.txt -Encoding ascii
gc .\_sql.txt -ReadCount 10 | %{ $_ -join ' ' }
Thanks everyone for the effort and the answers. I'm really surprised there wasn't a way to do this with Format-Table without the use of [guid]::Empty in Rohn Edward's answer.
My IDs are much more consistent than the example I gave, so Noah's use of gc -ReadCount is by far the simplest solution in this particular data set, but in the future I'd probably use Matt's answer or the answers linked to by Emperor in comments.
I came up with this:
$array =
(@'
1
2
3
10
11
100
101
'@).split("`n") |
foreach {$_.trim()}
$array = $array * 40
$SB = New-Object Text.StringBuilder(100,100)
foreach ($item in $array) {
    Try { [void]$SB.Append("$item ") }
    Catch {
        $SB.ToString()
        [void]$SB.Clear()
        [Void]$SB.Append("$item ")
    }
}
#don't forget the last line
$SB.ToString()
1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101
1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101
1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101
1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101
1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101
1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101
1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101 1 2 3 10 11 100 101
Maybe not as compact as you were hoping for, and there may be better ways to do it, but it seems to work.
I have a q table in which the number of non-keyed columns is variable, and these column names each contain an integer. I want to perform some function on these columns without actually using their names.
How can I achieve this?
For Example:
table:
a | col10 col20 col30
1 | 2 3 4
2 | 5 7 8
// Assume that I have numbers 10, 20 ,30 obtained from column names
I want something like: update NewCol:10*col10+20*col20+30*col30 from table
except that the number of columns is not fixed, and neither are the numbers included in their names.
We want to use a functional update (simple example shown here: http://www.timestored.com/kdb-guides/functional-queries-dynamic-sql#functional-update)
For this particular query we want to generate the computation tree of the select clause, i.e. the last part of the functional update statement. The easiest way to do that is to parse a similar statement then recreate that format:
q)/ create our table
q)t:([] c10:1 2 3; c20:10 20 30; c30:7 8 9; c40:0.1*4 5 6)
q)t
c10 c20 c30 c40
---------------
1 10 7 0.4
2 20 8 0.5
3 30 9 0.6
q)parse "update r:(10*c10)+(20*c20)+(30*c30) from t"
!
`t
()
0b
(,`r)!,(+;(*;10;`c10);(+;(*;20;`c20);(*;30;`c30)))
q)/ notice the last value, the parse tree
q)/ we want to recreate that using code
q){(*;x;`$"c",string x)} 10
*
10
`c10
q){(+;x;y)} over {(*;x;`$"c",string x)} each 10 20
+
(*;10;`c10)
(*;20;`c20)
q)makeTree:{{(+;x;y)} over {(*;x;`$"c",string x)} each x}
/ now write as functional update
q)![t;();0b; enlist[`res]!enlist makeTree 10 20 30]
c10 c20 c30 c40 res
-------------------
1 10 7 0.4 420
2 20 8 0.5 660
3 30 9 0.6 900
q)update r:(10*c10)+(20*c20)+(30*c30) from t
c10 c20 c30 c40 r
-------------------
1 10 7 0.4 420
2 20 8 0.5 660
3 30 9 0.6 900
I think functional select (as suggested by @Ryan) is the way to go if the table is quite generic, i.e. column names might vary and the number of columns is unknown.
Yet I prefer the way @JPC uses a vector to solve the multiplication and summation problem, i.e. update res:sum 10 20 30*(col10;col20;col30) from table
Let's combine both approaches with some extreme cases:
q)show t:1!flip(`a,`$((10?2 3 4)?\:.Q.a),'string 10?10)!enlist[til 100],0N 100#1000?10
a | vltg4 pnwz8 mifz5 pesq7 fkcx4 bnkh7 qvdl5 tl5 lr2 lrtd8
--| -------------------------------------------------------
0 | 3 3 0 7 9 5 4 0 0 0
1 | 8 4 0 4 1 6 0 6 1 7
2 | 4 7 3 0 1 0 3 3 6 4
3 | 2 4 2 3 8 2 7 3 1 7
4 | 3 9 1 8 2 1 0 2 0 2
5 | 6 1 4 5 3 0 2 6 4 2
..
q)show n:"I"$string[cols get t]inter\:.Q.n
4 8 5 7 4 7 5 5 2 8i
q)show c:cols get t
`vltg4`pnwz8`mifz5`pesq7`fkcx4`bnkh7`qvdl5`tl5`lr2`lrtd8
q)![t;();0b;enlist[`res]!enlist({sum x*y};n;enlist,c)]
a | vltg4 pnwz8 mifz5 pesq7 fkcx4 bnkh7 qvdl5 tl5 lr2 lrtd8 res
--| -----------------------------------------------------------
0 | 3 3 0 7 9 5 4 0 0 0 176
1 | 8 4 0 4 1 6 0 6 1 7 226
2 | 4 7 3 0 1 0 3 3 6 4 165
3 | 2 4 2 3 8 2 7 3 1 7 225
4 | 3 9 1 8 2 1 0 2 0 2 186
5 | 6 1 4 5 3 0 2 6 4 2 163
..
You can create a functional form query as @Ryan Hamilton indicated, and overall that will be the best approach since it is very flexible. But if you're just looking to add these up, multiplied by some weight, I'm a fan of going through other avenues.
EDIT: I missed that you said the numbers in the column names could vary, in which case you can easily adjust this. If the column names are all prefaced by the same number of letters, just drop those letters and parse the remainder into an int or what have you. Otherwise, if the numbers are embedded within text, check out this other question
//Create our table with a random number of columns (up to 9 value columns) and 1 key column
q)show t:1!flip (`$"c",/:string til n)!flip -1_(n:2+first 1?10) cut neg[100]?100
c0| c1 c2 c3 c4 c5 c6 c7 c8 c9
--| --------------------------
28| 3 18 66 31 25 76 9 44 97
60| 35 63 17 15 26 22 73 7 50
74| 64 51 62 54 1 11 69 32 61
8 | 49 75 68 83 40 80 81 89 67
5 | 4 92 45 39 57 87 16 85 56
48| 88 34 55 21 12 37 53 2 41
86| 52 91 79 33 42 10 98 20 82
30| 71 59 43 58 84 14 27 90 19
72| 0 99 47 38 65 96 29 78 13
q)update res:sum (1+til -1+count cols t)*flip value t from t
c0| c1 c2 c3 c4 c5 c6 c7 c8 c9 res
--| -------------------------------
28| 3 18 66 31 25 76 9 44 97 2230
60| 35 63 17 15 26 22 73 7 50 1551
74| 64 51 62 54 1 11 69 32 61 1927
8 | 49 75 68 83 40 80 81 89 67 3297
5 | 4 92 45 39 57 87 16 85 56 2582
48| 88 34 55 21 12 37 53 2 41 1443
86| 52 91 79 33 42 10 98 20 82 2457
30| 71 59 43 58 84 14 27 90 19 2134
72| 0 99 47 38 65 96 29 78 13 2336
q)![t;();0b; enlist[`res]!enlist makeTree 1+til -1+count cols t] ~ update res:sum (1+til -1+count cols t)*flip value t from t
1b
q)\ts do[`int$1e4;![t;();0b; enlist[`res]!enlist makeTree 1+til 9]]
232 3216j
q)\ts do[`int$1e4;update nc:sum (1+til -1+count cols t)*flip value t from t]
69 2832j
I haven't tested this on a large table, so caveat emptor
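As an aside, for the fixed-prefix case mentioned in the EDIT above, the name-parsing step could be as simple as the sketch below (run against the c-prefixed keyed table t shown above; for that instance it should give 1 2 3 4 5 6 7 8 9i):
q)"I"$1_'string (cols t) except keys t  / drop the "c" prefix, cast the rest to int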
Here is another solution which is also faster.
t,'([]res:(+/)("I"$(string tcols) inter\: .Q.n) *' (value t) tcols:(cols t) except keys t)
By spending some time, we can make the solution more concise as well. The logic goes like this:
a:"I"$(string tcols) inter\: .Q.n
Here I first extract the integers from the column names and store them in a vector. The variable tcols, assigned at the end of the query, is simply the columns of the table excluding the key columns.
b:(value t) tcols:(cols t) except keys t
Here I am extracting out each column vector.
c:(+/) a *' b
Multiply each column vector (b) by its integer (a) and add the corresponding values from each resulting list.
t,'([]res:c)
Finally, store the result in a temporary table and join it to t.
I have two matrices with the first column being the primary key as shown below:
| 123 3 234 11 |
| 124 2 634 22 |
A = | 125 8 731 33 |
| 126 8 237 44 |
| 127 6 235 55 |
| 124 34 23 |
B = | 125 45 73 |
| 126 33 37 |
| 127 44 25 |
I want the new matrix C such that:
find(A(:,2) > 5). In this case the indices that satisfy this condition are 3 and 4.
The primary key values for indices 3 and 4 in A are 125 and 126.
Find the rows having the values 125 and 126 in B, which are 2 and 3.
Create the new matrix C by concatenating the values in A and B with that primary key.
Matrix C should look like
C = | 125 8 731 33 45 73 |
    | 126 8 237 44 33 37 |
How can I do this?
Thanks!
Your key function to use is ISMEMBER. Use two output indices:
[idxa, idxb] = ismember(a(:,1), b(:,1));
idxb(idxb==0) = [];
Then you can combine
c = [a(idxa,:) b(idxb,:)];
I hope you can add filters and select the columns you need by yourself.
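If it helps, here is a rough end-to-end sketch that also folds in the A(:,2) filter and drops B's duplicate key column (mask, Af, tf, loc and C are just illustrative names):
mask = A(:,2) > 5;                      % rows of A passing the filter
Af = A(mask, :);
[tf, loc] = ismember(Af(:,1), B(:,1));  % which filtered keys exist in B, and where
C = [Af(tf, :), B(loc(tf), 2:end)];     % append B's data columns, skipping its key column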
The statistics toolbox contains a function called JOIN that basically does what you want.
http://www.mathworks.de/de/help/stats/dataset.join.html
I think it does what you are looking for. Does it?
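In newer MATLAB releases the table type gives you the same thing via innerjoin; a rough sketch along those lines (the variable names are made up for illustration):
TA = array2table(A, 'VariableNames', {'ID','v1','v2','v3'});
TB = array2table(B, 'VariableNames', {'ID','w1','w2'});
C = innerjoin(TA(TA.v1 > 5, :), TB, 'Keys', 'ID');  % filter A's rows, then join on the key column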