Before Swift, a MIDIMetaEvent's data was accessed via data[0], data[1], etc.
To get to a time signature I need two values from the data part, which is declared as (UInt8), with the parens.
But when I try to get the value in this way:
let midiMessage = UnsafePointer<MIDIMetaEvent>(eventData).memory
let data1 = midiMessage.data[0]
This results in an error: "Cannot subscript a value of type 'UInt8' with an index of type 'Int'".
Any clue what I've done wrong here? Just getting midiMessage.data only returns the first byte of data.
No snark intended, but file a Radar and ask for an "enhancement" to get rid of using tuples for dynamically sized arrays. They have already done this for some parts of Core MIDI, but not for meta events, or for MIDI thru.
It would be nice if they just added the Core MIDI functionality to AVFoundation (they've started) to get rid of the C API altogether.
In the meantime you can go through contortions like this one, which uses Mirror: https://github.com/jverkoey/swift-midi/blob/master/LUMI/CoreMIDI/MIDIPacket%2BSequenceType.swift
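For the variable-length data of a MIDIMetaEvent specifically, one workaround is plain pointer arithmetic. Below is a minimal sketch in current Swift, assuming you already have an UnsafePointer<MIDIMetaEvent> in hand; metaEventBytes is a made-up name, purely for illustration.

import AudioToolbox

// Sketch: read the dataLength bytes that follow the MIDIMetaEvent header fields.
// `data` is imported as a single UInt8, but it is really just the first byte of a
// variable-length payload, so read from its offset inside the struct instead.
func metaEventBytes(_ event: UnsafePointer<MIDIMetaEvent>) -> [UInt8] {
    let count = Int(event.pointee.dataLength)
    let offset = MemoryLayout<MIDIMetaEvent>.offset(of: \MIDIMetaEvent.data) ?? 8  // 8 = size of the fixed header fields
    let raw = UnsafeRawPointer(event) + offset
    return Array(UnsafeRawBufferPointer(start: raw, count: count))
}

// For a time signature meta event, bytes[0] is the numerator and bytes[1] is the
// denominator expressed as a power of two.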
This is the screenshot of the error that I found.
The compiler can't see what type data is, so it treats it as type Object, and Object doesn't have a [] operator. Assuming that data is actually a Map, you can try to replace
var data = doc.data();
with
Map? data = doc.data() as Map?;
There might be a nicer way to write this, but that's hard to know without seeing more of your code. Usually you can already indicate the type somewhere above it by using <Map> in the right place.
I am currently working on a script where, within a function, key-value pairs are added to a dictionary x - consider x as a single dictionary of different inputs used to query data, with different key-value pairs appended to it depending on certain conditions being fulfilled.
However, when I load the script into my session with some new assignment logic added, I hit a 'constants error, despite all assignments being kept to this dictionary x. When these two new assignments within x are commented out, the script loads successfully.
I know the 'constants error usually refers to the max number of constants within a certain scope being exceeded, but surely this shouldn't be happening when all assignment is happening within this dictionary x. Is there a way to get around this? What is causing this issue?
I think you are trying to do too much in one function, indexing or assigning values to the dictionary with too many constants. The code below will return the 'constants error:
dict:(10 + til 100)!til 100
// build the string "{dict[10];dict[11];...;dict[105];}" and evaluate it as a function
value (raze -1_"{","dict[",/:(string[10+til 97],\:"];")),"}"
// with til 96 the generated function is
// {dict[10];dict[11] ... dict[104]}
It's the code that is indexing the dictionary that is causing the issue, rather than the dictionary itself.
I'm working with scalax to generate a graph of my Spark operations. I have a custom library that generates my graph. Let me show a sample:
val DAGWithoutGet = createGraphFromOps(ops)
val DAGWithGet = createGraphFromOps(ops).get
The return type of DAGWithoutGet is
scala.util.Try[scalax.collection.Graph[typeA, scalax.collection.GraphEdge.DiEdge]],
and, for DAGWithGet, it is
scalax.collection.Graph[typeA, scalax.collection.GraphEdge.DiEdge].
Here, typeA is a project-related class representing a single Spark operation, not relevant for the context of this question. (For context only: what my custom library does is, essentially, generate a map of dependencies between those operations, creating a big Map object, and then call Graph(myBigMap: _*) to generate the graph.)
As far as I know, calling .get at this point of my code or later should not make any difference, but that is not what I'm seeing.
Calling DAGWithoutGet.get.nodes has a return type of scalax.collection.Graph[typeA,DiEdge]#NodeSetT,
while calling DAGWithGet.nodes returns DAGWithGet.NodeSetT.
When I extract one of those nodes (using the .find method), I receive scalax.collection.Graph[typeA,DiEdge]#NodeT and DAGWithGet.NodeT types, respectively. Much to my dismay, even the methods available in each case are different - I cannot use pathTo (which happens to be what I want) or withSubgraph on the former, only on the latter.
My doubt is, then, after this relatively complex example: what is going on here? Why does extracting the value from the Try construct at different moments lead to different types, one path-dependent and the other not? Or, if that isn't correct, what may I be missing here?
I have used Xcode 6.3's converter to convert my project to Swift 1.2.
After that I was still left with many errors, but I fixed them all manually.
Now when I compile I get:
<unknown>:0: error: '[Set<T>]' is not convertible to 'Hashable'.
The only place I use Set is:
var productID:Set<NSObject> = [subscriptionId]
var productsRequest:SKProductsRequest = SKProductsRequest(productIdentifiers: productID)
I have tried cleaning the project and also tried deleting the DerivedData folder, but that didn't help.
I have searched but I couldn't find anyone with the same problem.
Anyone know how to solve this?
This won’t be a problem with derived data. It looks like where previously you had an NSArray (probably of NSSet), you now have an Array of Set. Presumably you’re then trying to do something like use that value to key a dictionary type. In 6.3, several API calls that previously returned NSSomething now return native Swift types.
Swift Arrays aren’t hashable (because they might contain something that isn’t hashable). NSArrays are (though not always in a helpful way, depending on what they contain, so be wary).
Bear in mind, with type inference, that your explicit use of Set or Array won’t be the only places you might have a set. If you call a function that returns an array of sets and assign that value like so: let thing = funcThatReturnsArrayOfSets(), then you will have a [Set<whatever>] even without explicitly writing that type in your code.
To fix this, you need to find the line where you’re getting the error, look at the types involved, then trace back to where those variables were declared. Option-click all the things to see what types they are.
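As a minimal sketch of what that trace-back can look like in current Swift, building on the funcThatReturnsArrayOfSets example above (the function body and product identifiers are made up, not taken from your project): writing the types out makes it obvious that the call yields [Set<String>] rather than the single Set the StoreKit initializer expects, and flattening is one possible fix.

import StoreKit

// Made-up stand-in for whatever call now returns native Swift types.
func funcThatReturnsArrayOfSets() -> [Set<String>] {
    return [["com.example.sub.monthly"], ["com.example.sub.yearly"]]
}

let groups: [Set<String>] = funcThatReturnsArrayOfSets()  // an array of sets, not a set
let productIDs: Set<String> = Set(groups.flatMap { $0 })  // flatten to the single Set
let productsRequest = SKProductsRequest(productIdentifiers: productIDs)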
In a class header I have seen something like this:
enum {
    kAudioSessionProperty_PreferredHardwareSampleRate = 'hwsr', // Float64
    kAudioSessionProperty_PreferredHardwareIOBufferDuration = 'iobd' // Float32
};
Now I wonder what data type such a kAudioSessionProperty_PreferredHardwareSampleRate actually is?
I mean, this looks like plain old C, but in Objective-C I would write @"hwsr" if I wanted to make it a string.
I want to pass such a "constant" or "enum thing" as an argument to a method.
This converts to a UInt32 enum value using the ASCII value of each of the characters. This style has been around for a long time in Mac OS headers.
'hwsr' has the same value as if you had written 0x68777372, but is a lot more reader-friendly. If you used the @"hwsr" style instead you would need more than 4 bytes to represent the same thing.
The advantage of using this style is that you can quickly identify the content of a raw data stream if you can see its ASCII values.
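To make the byte packing concrete, here is a small sketch (written in Swift purely for illustration; fourCharCode is a made-up helper) that builds the same UInt32 from the four ASCII characters:

// Pack four ASCII characters into the equivalent UInt32, high byte first.
func fourCharCode(_ s: String) -> UInt32 {
    precondition(s.utf8.count == 4, "expects exactly four one-byte characters")
    return s.utf8.reduce(0) { ($0 << 8) | UInt32($1) }
}

fourCharCode("hwsr") == 0x68777372  // true: 'h' 'w' 's' 'r' are 0x68 0x77 0x73 0x72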