libspotify on iOS or macOS: parts of the SPSession fail to be retrieved

I don't know whether any of you have been playing with the recently released Spotify API, but there is something that is bugging me.
Once you get past the -(void)sessionDidLoginSuccessfully:(SPSession *)aSession callback, there is pretty much no information in the SPSession object.
A bit of code inspection of CocoaLibSpotify suggests this is actually normal: the data is retrieved later on.
The problem is that some of this information seems to never be retrieved at all. I've followed an approach similar to their "Guess the Intro" example, and if I do:
- (void)sessionDidLoginSuccessfully:(SPSession *)aSession
{
    // Try to fetch another piece of info about the user.
    userTopList = [[SPToplist toplistForCurrentUserInSession:aSession] retain];
    [self waitForReadiness];
}

- (void)waitForReadiness
{
    // Even after 10 seconds, userPlaylists is still nil.
    if (![[[SPSession sharedSession] userPlaylists] isLoaded])
    {
        playlistsAttempts++;
        if (playlistsAttempts < 10)
        {
            [self performSelector:_cmd withObject:nil afterDelay:1.0];
            return;
        }
    }

    // However, after only 1 second, userTopList is fetched.
    if (userTopList.isLoaded)
    { /* do stuff */ }
}
Basically, userTopList is correctly set in less than a second, while the session's userPlaylists stays nil.
The same thing happens in the bundled example.
So I'm starting to think that the lib is just not quite there yet, but I would gladly take your input.

I was having the same problem and found that the following patch sorted it:
https://github.com/spotify/cocoalibspotify/commit/2c9b85e306a8849675e5b30169481d82dbeb34f5
Hope this helps.
-Dx
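For what it's worth, newer CocoaLibSpotify revisions also ship an SPAsyncLoading helper that can replace the hand-rolled polling above. A minimal sketch, assuming a library version that includes it:

// Sketch assuming a CocoaLibSpotify revision that includes SPAsyncLoading.
#import "SPAsyncLoading.h"

- (void)sessionDidLoginSuccessfully:(SPSession *)aSession
{
    // Wait (up to 10s) for the session itself, then for the playlist
    // container; both are populated asynchronously after login.
    [SPAsyncLoading waitUntilLoaded:aSession timeout:10.0 then:^(NSArray *loadedSession, NSArray *notLoaded) {
        SPPlaylistContainer *container = aSession.userPlaylists;
        [SPAsyncLoading waitUntilLoaded:container timeout:10.0 then:^(NSArray *loadedContainer, NSArray *stillNotLoaded) {
            NSLog(@"User has %lu playlists", (unsigned long)container.playlists.count);
        }];
    }];
}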

Related

FreeRTOS ignores osDelay [STM32]

I am new here, but have often benefited from questions and their answers.
Now I have a problem that I have not been able to solve for days. It involves an STM32L431RCT6 running FreeRTOS. There are 3 tasks running on it. Two of them are dedicated to CANopenNode processing (one to send data, one to receive). One is a custom controller. The code works as it is, but as soon as I enable the CRC unit (MX_CRC_Init()) the problem appears: only the task that receives CANopen data is executed, and the other tasks stay in the "Ready" state. What I notice is that the osDelay() used in the receive task seems to be ignored. It doesn't seem to matter whether I use osDelay(1) or osDelay(10000).
void start_CO_rec_Thread(void *argument)
{
  /* USER CODE BEGIN start_CO_TI_Thread */
  /* Infinite loop */
  for (;;)
  {
    canopen_app_interrupt();
    osDelay(1);
  }
  osThreadTerminate(NULL);
  /* USER CODE END start_CO_TI_Thread */
}
One more observation: if I remove functions and variables from the task that starts my own controller, the processing of the tasks works again. From this I concluded that the tasks need more stack memory, but that is not the solution to the puzzle either (a way to check this is sketched after the two listings below)...
Working Code:
void Start_Controller(void *argument)
{
  Sensor = new DS18B20(htim6);
  Regler = new Smithpredictor(Sensor);
  // osTimerStart(LifetimerHandle, 1000);
  for (;;)
  {
    if (global_state == 4)
      actual_Temperatur = Regler->run(target_Temperature);
    osDelay(MBC_intervall_s * 1000);
  }
  osThreadTerminate(NULL);
}
Not Working Code:
void Start_Controller(void *argument)
{
  Sensor = new DS18B20(htim6);
  Regler = new Smithpredictor(Sensor);
  osTimerStart(LifetimerHandle, 1000);
  for (;;)
  {
    if (global_state == 4)
      actual_Temperatur = Regler->run(target_Temperature);
    osDelay(MBC_intervall_s * 1000);
  }
  osThreadTerminate(NULL);
}
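To test the stack theory, one option (a sketch under assumptions, not from the original post) is to log each task's stack high-water mark with FreeRTOS's uxTaskGetStackHighWaterMark(); controllerTaskHandle and the printf retargeting are placeholders:

/* Sketch: log how much stack headroom a task has left. Requires
   INCLUDE_uxTaskGetStackHighWaterMark = 1 in FreeRTOSConfig.h, and assumes
   printf is retargeted (e.g. to a UART). controllerTaskHandle is a
   placeholder for the handle of the task under suspicion. */
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

void log_stack_headroom(TaskHandle_t controllerTaskHandle)
{
  /* Minimum free stack (in words) observed since the task started;
     values near 0 mean the task is close to overflowing its stack. */
  UBaseType_t words = uxTaskGetStackHighWaterMark(controllerTaskHandle);
  printf("min free stack: %lu words\r\n", (unsigned long)words);
}

Setting configCHECK_FOR_STACK_OVERFLOW to 2 in FreeRTOSConfig.h and providing vApplicationStackOverflowHook() is another stock FreeRTOS way to catch an overflow the moment it happens.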
I hope someone has an idea how to proceed or can help me, even if it is just another way to narrow down the problem. Maybe I have done something silly somewhere; I am unfortunately not a computer scientist.

How can I know my Stopwatch has run?

I use several stopwatches in my application. They are all created together, but only some of them have actually run (due to exceptions earlier in the code or other things).
After my application has run, I create my report using those stopwatches. For instance, I do the following:
Stopwatch subStopwatch = Stopwatch.createUnstarted();
Stopwatch mainStopwatch = Stopwatch.createStarted();
try {
    // do something 1
    subStopwatch.start();
    // do something 2
    subStopwatch.stop();
} finally {
    mainStopwatch.stop();
    System.out.printf("Total run time: %s%n", mainStopwatch);
    if (!subStopwatch.isRunning()) {
        System.out.printf("  including sub run time: %s%n", subStopwatch);
    }
}
The problem with this code is that if something happens in "do something 1" (an early return, an exception), subStopwatch is printed anyway.
The following solutions work:
- Using a boolean to record that I started the stopwatch.
- Using a stopwatch more locally, with a reporting mechanism that carries the information I'm looking for.
But the main question remains: can I tell that a stopwatch has run using Stopwatch alone?
You can check the elapsed time on the stopwatch:
if (subStopwatch.elapsed(TimeUnit.NANOSECONDS) > 0) {
    // it ran
}

NSManagedObjectContext performBlock and dispatch_group_t

The problem is that I need to modify (update/create/delete) anywhere from 0 to 10000 NSManagedObject subclasses. Of course, if it's <= 1000, everything works fine. I'm using this code:
+ (void)saveDataInBackgroundWithBlock:(void (^)(NSManagedObjectContext *))saveBlock completion:(void (^)(void))completion {
    NSManagedObjectContext *tempContext = [self newMergableBackgroundThreadContext];
    [tempContext performBlock:^{
        if (saveBlock) {
            saveBlock(tempContext);
        }
        if ([tempContext hasChanges]) {
            [tempContext saveWithCompletion:completion];
        } else {
            dispatch_async(dispatch_get_main_queue(), ^{
                if (completion) {
                    completion();
                }
            });
        }
    }];
}

- (void)saveWithCompletion:(void (^)(void))completion {
    [self performBlock:^{
        NSError *error = nil;
        if ([self save:&error]) {
            NSNumber *contextID = [self.userInfo objectForKey:@"contextID"];
            if (contextID.integerValue == VKCoreDataManagedObjectContextIDMainThread) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    if (completion) {
                        completion();
                    }
                });
            }
            [[self class] logContextSaved:self];
            if (self.parentContext) {
                [self.parentContext saveWithCompletion:completion];
            }
        } else {
            [VKCoreData handleError:error];
            dispatch_async(dispatch_get_main_queue(), ^{
                if (completion) {
                    completion();
                }
            });
        }
    }];
}
completion will be fired only when the main-thread context has been saved. This solution works just perfectly, but:
When I get more than 1000 entities from the server, I would like to parallelize the object processing, because the update operation takes too much time (for example, updating 4500 objects takes about 90 seconds, and less than a third of that is receiving the JSON, so I spend about 60 seconds just drilling through NSManagedObjects). Without Core Data it's pretty easy to use dispatch_group_t to divide the data into subarrays and process them on different threads at the same time, but... does anybody know how to do something similar with Core Data and NSManagedObjectContexts? Is it possible to work with an NSManagedObjectContext of NSPrivateQueueConcurrencyType (iOS 5 style) without performBlock:? And what is the best way to save and merge about 10 contexts? Thanks!
By your description, it appears you are grasping at straws to recover performance.
Core Data file I/O performance is dominated by the single-threaded nature of SQLite. Having multiple contexts beating on the same store coordinator is not going to make things go faster.
To improve performance, you need to do things differently. For example, you could batch your background writes into larger operations. (How? You need to do more in each GCD block before the save.) You can use Core Data's debugging tools to see what kind of SQL is being emitted by your fetches and saves. (There are lots of ways to improve CD fetch performance, fewer to improve saving.)
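As an illustration of the batching idea (a sketch, not the answerer's code; records, batchSize, and the per-object import step are assumptions):

// Sketch of batched background writes: one save per batch of records
// instead of one per object. "records" (parsed JSON dictionaries) and
// "batchSize" are made up for illustration.
NSUInteger batchSize = 500;

[tempContext performBlock:^{
    [records enumerateObjectsUsingBlock:^(NSDictionary *json, NSUInteger idx, BOOL *stop) {
        @autoreleasepool {
            // ... create or update one NSManagedObject from json ...
        }
        if ((idx + 1) % batchSize == 0) {
            NSError *error = nil;
            [tempContext save:&error];   // one save per batch, not per object
            [tempContext reset];         // drop materialized objects to cap memory
        }
    }];
    NSError *error = nil;
    [tempContext save:&error];           // final partial batch
}];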
OK people, after I finished implementing everything I wanted, I discovered the following.
dispatch_group_t with different private queues and NSManagedObjectContexts; the format is "number of entities / seconds":
333  / 6
1447 / 27
3982 / 77
Single background thread (NSManagedObjectContext + NSPrivateQueueConcurrencyType + performBlock:):
333  / 1
1447 / 8
3982 / 47
So I think I shouldn't try that again. There are also a lot of other issues, such as the app freezing while merging a large number of contexts (even in the background). I will try something else to improve performance.
You can create multiple contexts and process a slice of your data on each one...?

is it possible to determine the level of memory warnings?

I am receiving memory warnings in didReceiveMemoryWarning. I know memory warnings have different levels, such as level 1 and level 2. Is there any way to determine the warning level? For example:
if (warning_level == 1)
    <blah>
There are 4 levels of warnings (0 to 3). They are set by the kernel memory watcher and can be obtained via the not-so-public function OSMemoryNotificationCurrentLevel():
typedef enum {
    OSMemoryNotificationLevelAny      = -1,
    OSMemoryNotificationLevelNormal   =  0,
    OSMemoryNotificationLevelWarning  =  1,
    OSMemoryNotificationLevelUrgent   =  2,
    OSMemoryNotificationLevelCritical =  3
} OSMemoryNotificationLevel;
How the levels are triggered is not documented. SpringBoard is configured to do the following at each memory level:
Warning (not normal): relaunch, or delay auto-relaunch of, nonessential background apps such as Mail.
Urgent: quit all background apps, e.g. Safari and iPod.
Critical and beyond: the kernel takes over, probably killing SpringBoard or even rebooting.
I know there is no way (other than the private/undocumented API) to learn the memory warning level, so you should not use that.
Check out this question to see the undocumented API for getting the memory warning level.
My first advice would be to research the memory warning notification in the docs (e.g., what the contents of its userInfo dictionary are, if present). I don't know whether it provides any details.
But ultimately you shouldn't speculate on the level of the memory warning; just assume the worst and release as much unused data as you can.
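For instance, a minimal sketch of that "assume the worst" approach (cachedImages is a hypothetical property standing in for whatever can be rebuilt lazily):

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Treat every warning as severe: drop anything recomputable.
    self.cachedImages = nil;                                  // hypothetical cache
    [[NSURLCache sharedURLCache] removeAllCachedResponses];   // shared URL cache
}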
There is no (public, working) way to get the current memory pressure level from the system on a customer device. There is however a way to get notified of memory pressure changes using the Dispatch Source API.
Memory pressure dispatch sources can be used to notify an application of changes to memory pressure. This can be more fine-grained than the notifications provided by UIKit and includes the capability to be notified when memory pressure returns to normal.
For example:
Objective-C:
dispatch_source_t memorySource = NULL;
memorySource = dispatch_source_create(DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0L, (DISPATCH_MEMORYPRESSURE_NORMAL | DISPATCH_MEMORYPRESSURE_WARN | DISPATCH_MEMORYPRESSURE_CRITICAL), [self privateQueue]);
if (memorySource != NULL) {
    dispatch_block_t eventHandler = dispatch_block_create(DISPATCH_BLOCK_ASSIGN_CURRENT, ^{
        if (dispatch_source_testcancel(memorySource) == 0) {
            dispatch_source_memorypressure_flags_t memoryPressure = dispatch_source_get_data(memorySource);
            [self didReceiveMemoryPressure:memoryPressure];
        }
    });
    dispatch_source_set_event_handler(memorySource, eventHandler);
    dispatch_source_set_registration_handler(memorySource, eventHandler);
    [self setSource:memorySource];
    dispatch_activate([self source]);
}
Swift 4:
if let source: DispatchSourceMemoryPressure = DispatchSource.makeMemoryPressureSource(eventMask: .all, queue: self.privateQueue) as? DispatchSource {
    let eventHandler: DispatchSourceProtocol.DispatchSourceHandler = {
        let event: DispatchSource.MemoryPressureEvent = source.data
        if source.isCancelled == false {
            self.didReceive(memoryPressureEvent: event)
        }
    }
    source.setEventHandler(handler: eventHandler)
    source.setRegistrationHandler(handler: eventHandler)
    self.source = source
    self.source?.activate()
}
Note that the event handler is also being used as the "registration handler". This will cause the event handler to fire when the dispatch source is activated, effectively telling the application of what the "current" value is when the source is activated.

Why would alSourceUnqueueBuffers fail with INVALID_OPERATION

Here's the code:
ALint cProcessedBuffers = 0;
ALenum alError = AL_NO_ERROR;
alGetSourcei(m_OpenALSourceId, AL_BUFFERS_PROCESSED, &cProcessedBuffers);
if ((alError = alGetError()) != AL_NO_ERROR)
{
    throw "AudioClip::ProcessPlayedBuffers - error returned from alGetSourcei()";
}

alError = AL_NO_ERROR;
if (cProcessedBuffers > 0)
{
    alSourceUnqueueBuffers(m_OpenALSourceId, cProcessedBuffers, arrBuffers);
    if ((alError = alGetError()) != AL_NO_ERROR)
    {
        throw "AudioClip::ProcessPlayedBuffers - error returned from alSourceUnqueueBuffers()";
    }
}
The call to alGetSourcei returns with cProcessedBuffers > 0, but the following call to alSourceUnqueueBuffers fails with INVALID_OPERATION. It is an erratic error that does not always occur. The program containing this sample code is a single-threaded app running in a tight loop (typically it would be synced with a display loop, but in this case I'm not using a timed callback of any sort).
Try alSourceStop(m_OpenALSourceId) first.
Then alSourceUnqueueBuffers(), and after that restart playback with alSourcePlay(m_OpenALSourceId).
I solved the same problem this way, but I don't know why it has to be done like that.
As mentioned in this SO thread, if you have AL_LOOPING enabled on a streaming source, the unqueue operation will fail.
The looping flag keeps some sort of lock on the buffers while enabled. The answer by @MyMiracle hints at this as well: stopping the source releases that hold, but stopping isn't actually necessary.
AL_LOOPING is not meant to be set on a streaming source, since you manage the source's data through the queue. Keep queuing and it will keep playing; queue from the beginning of the data and it will loop.
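A sketch of that manual-looping pattern (fill_from_clip() is a made-up helper that refills a buffer from the clip's data, wrapping back to the start at end of data):

/* Keep AL_LOOPING off so alSourceUnqueueBuffers() keeps working. */
alSourcei(m_OpenALSourceId, AL_LOOPING, AL_FALSE);

ALint processed = 0;
alGetSourcei(m_OpenALSourceId, AL_BUFFERS_PROCESSED, &processed);
while (processed-- > 0)
{
    ALuint buf = 0;
    alSourceUnqueueBuffers(m_OpenALSourceId, 1, &buf);
    fill_from_clip(buf);   /* made-up helper; calls alBufferData() inside */
    alSourceQueueBuffers(m_OpenALSourceId, 1, &buf);
}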