Short background: I am currently writing a program in Xcode for the Mac, which I plan to take parts of (conceptually, if not whole chunks of the code) over to the iPhone. It involves constantly receiving data over Bluetooth from an external sensor (the data must be received regardless of user interaction). I've built a simple program on the Mac using IOBluetooth that pairs and starts receiving the data just fine, and I plan on using BTstack and a jailbroken iPhone in order to access the Bluetooth chip on the iPhone.
Before I get too far I want to conceptually lay this program out correctly, because I am used to procedural programming and Obj-C is a new beast for me. As I stated, I would like to be able to save as much of this code as possible when I move to the iPhone (I understand there are different classes for views etc, but I see -lots- of similarities).
1) With my program I will be constantly receiving data in the background (regardless of user actions - i.e., once the user starts the program and picks the BT device, the data will flow), and I need to store and analyze that data before it can be presented to the user. So (the question), how would one lay this out? I was thinking of putting all of my BT code in the app delegate, then having a view controller (on the Mac just one which handles the window, but on the iPhone a tab controller with multiple sub view controllers), and a model that analyzes and stores the data (also as log files, for future reference) that is accessed by the "controller", in this case the app delegate. Does this layout make sense? Is it kosher MVC/Cocoa to put all of the BT code and analysis in the app delegate, or should it (they) be in its own class(es) (knowing the BT code on both the Mac and iPhone must constantly receive bursts of data)? How could it be improved?
2) A related question on the analysis side. I haven't found a single Cocoa example on the net that has analysis (I've found programs, but no explanation of the model they use). The basic data that is saved is very small ~50kB per hour. However, the results (including spectrum and waterfall plots) could be >2MB per hour (this is a program that one might run for many hours a day). To analyze "on the go" and just throw the results in a scrolling buffer I know would be very fast, but I want my program to allow the user to look back at specific time segments in the past. The question I have is should the model object analyze the data and store the results alongside the basic data, or should the model only store the basic data, and return that data to the controller which would then analyze it to present it to the view (this would be very CPU heavy if regraphing even minutes of data, let alone hours)?
Any thoughts or suggestions would be greatly appreciated, as I feel laying proper groundwork could save me untold hours of coding (and fixing/debugging) later.
As for your question 1:
I suggest writing a class/object which manages the Bluetooth data, separate from the app delegate. The app delegate is where the view objects meet the controller, and as such there will be lots of calls to AppKit (on OS X) and to UIKit (on iOS). The differences will be so great that #ifdef-ing between the OSes inside the same file won't make much sense for the app delegate.
Rather, make an ivar inside the app delegate that holds the Bluetooth controller. That way your code will be better structured and easier to reuse.
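To make the ownership concrete, here is a minimal sketch of that structure (in Python, standing in as language-neutral pseudocode; all class and method names are hypothetical): the app delegate merely holds the Bluetooth controller, which in turn feeds a separate model.

```python
class DataModel:
    """Stores and analyzes the raw samples; analysis lives here,
    not in the app delegate."""
    def __init__(self):
        self.samples = []

    def append(self, chunk):
        self.samples.extend(chunk)


class BluetoothManager:
    """Owns all Bluetooth I/O; forwards received bursts to the model."""
    def __init__(self, model):
        self.model = model

    def did_receive(self, chunk):
        # In the real program this would be the BT stack's data callback.
        self.model.append(chunk)


class AppDelegate:
    """Platform-specific glue only; holds the manager as an ivar."""
    def __init__(self):
        self.model = DataModel()
        self.bluetooth = BluetoothManager(self.model)
```

Only `AppDelegate` would differ between the AppKit and UIKit versions; `BluetoothManager` and `DataModel` carry over unchanged.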
As for your question 2:
On an OS X machine, which usually comes with plenty of RAM these days, holding and caching all the resulting data in RAM would be just fine if it's 2 MB per hour.
On an iOS device, RAM is a seriously endangered resource. If your program caches the calculated data in memory, consumes a lot of RAM, and the user sends it to the background, the OS might outright kill your program instead of suspending it, for example. Then you'll need to recalculate the data anyway when your app is re-launched.
The filesystem capacity is quite large even on an iOS device. So one way out is to write your calculated data out to disk and let the view controller reload previously calculated data from there. That way, your program can access pre-calculated data even after it's relaunched.
That caching code can even be shared between OS X and iOS, if you don't hard-code the cache directory into the program.
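A minimal sketch of such shared caching code, in Python for illustration (the segment-ID keying and JSON format are assumptions): the cache directory is passed in, so the OS X build can point it at `~/Library/Caches` and the iOS build at the app's Caches directory.

```python
import json
from pathlib import Path


class ResultCache:
    """Caches computed results on disk, keyed by time segment.
    The cache directory is injected rather than hard-coded, so the
    same class works on both platforms."""

    def __init__(self, cache_dir):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def _path(self, segment_id):
        return self.cache_dir / f"{segment_id}.json"

    def store(self, segment_id, results):
        # Persist calculated results so they survive a relaunch.
        self._path(segment_id).write_text(json.dumps(results))

    def load(self, segment_id):
        # Returns None on a cache miss, signalling "recalculate".
        p = self._path(segment_id)
        return json.loads(p.read_text()) if p.exists() else None
```

The view controller would call `load` first and only fall back to recomputing (and then `store`-ing) on a miss.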
If your software on the iPhone is supposed to run continuously in the background processing data from BTstack, I recommend creating a LaunchDaemon for the data processing and providing a regular app for the configuration. (Although BTstack Mouse / Keyboard / GPS don't follow this advice, they will when I get around to updating them - Celeste, for example, uses a daemon for the actual file transfers.)
Did a simple Google search:
https://www.google.com/search?q=advantages+of+ionic+lazy+loading
And couldn't really find a detailed description of the advantages of lazy loading. Anyone care to explain?
Long story short: (startup)-performance!
The underlying problem:
When you do a cold start of your app (no resume), the webview engine needs to load, parse and interpret a lot of JavaScript before the app becomes usable. Top high-end devices are mostly capable of doing so in a somewhat acceptable timeframe, but on hardware that is a few years old, or simply not equipped with enough CPU power, this may take a while.
Another problem (especially when developing PWAs) is network speed: on WiFi or 4G it is no problem at all (though still far from ideal!) to quickly download a few MB of JavaScript. But on a slow 3G connection you can go and drink a coffee while waiting for your app to become interactive.
Lazy-loading to the rescue!
So how can we minimize the effort needed to make the app interactive faster? We split our heavy main bundle up into many smaller bundles. If we start our app now, only the bare minimum of JavaScript needed for the first page has to be fetched and parsed. Every time we need a specific feature (a page), we do the loading just in time (lazy) instead of ahead of time (eager). By always fetching just a small chunk of JavaScript when needed, the performance gain will be huge on some devices and definitely noticeable on every device.
If you implement lazy loading in Ionic 3, your code also becomes more modular and maintainable, because you create a self-contained Angular module for every IonicPage, and by pushing a string onto the nav stack instead of an actual page instance you remove a lot of dependencies (imports) from your code.
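The eager-vs-lazy difference can be sketched generically (Python here stands in for the bundled JavaScript; all names are made up): pages are keyed by string, and a page's module is only "fetched and parsed" the first time it is navigated to, never at startup.

```python
# Counters let us observe when each chunk actually gets loaded.
load_count = {"home": 0, "settings": 0}


def load_home():
    load_count["home"] += 1      # stand-in for fetching/parsing a JS chunk
    return "home module"


def load_settings():
    load_count["settings"] += 1
    return "settings module"


# Registry of loader functions - analogous to the string-keyed pages
# that Ionic's deep linker resolves to lazily-loaded Angular modules.
loaders = {"home": load_home, "settings": load_settings}
loaded = {}


def navigate(page_name):
    # We "push a string", not a page instance; the chunk is loaded
    # just in time on first navigation and reused afterwards.
    if page_name not in loaded:
        loaded[page_name] = loaders[page_name]()
    return loaded[page_name]
```

At startup nothing is loaded at all; the cost of each page is paid only when the user first visits it.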
I need to develop a real-time application which can handle user input (from an external control panel) as fast as possible and provide output to an LCD monitor (very fast as well).
To be more exact - I need to handle fixed-time interrupts (with a period of 1 ms) to recalculate an internal model, using the current state fetched from the external control panel.
When the internal model changes I need to update the picture on the LCD monitor (for now I think the most proper way is to update on each interrupt). I also don't want any delays here.
What is the most suitable platform to implement it? And also which one is the most cost-effective?
I've heard about QNX, IntervalZero RTX, rtlinux but don't know the details and abilities of each one.
Thanks!
As far as the different OSes go, I know QNX has very good "hard" real time and has been built & optimized from the ground up. It also now has Qt running on it (QNX 6.5) for full-featured GUIness.
I have heard (2nd hand) anecdotal information that rtlinux is very close to hard realtime (guaranteed realtime), but it can sometimes be late if a driver (usually 3rd party) is not coded well. [This was from a RTOS vendor, so take it for what it is worth.]
As a design issue, I'd decouple the three separate operations into three threads with different priorities: one thread to fetch the data and set a semaphore that new data is ready, one thread to update the model and set a semaphore that the model is ready, and one thread to update the GUI. I would run the GUI thread at a much slower update rate. Most monitors are in the 60-120Hz range for updating. Why update faster than the data can be shown on the screen?
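A rough sketch of that three-stage pipeline, using Python threads and queues for illustration (the blocking queues play the role of the semaphores; thread priorities are OS-specific and not shown, and the sample values are made up):

```python
import queue
import threading

raw = queue.Queue()      # acquisition thread -> model thread
frames = queue.Queue()   # model thread -> GUI thread

N = 10  # number of samples; stands in for the 1 ms interrupt stream


def acquire():
    # Highest-priority stage: fetch data and signal that it is ready.
    for i in range(N):
        raw.put(i)               # in reality: read the control panel state
    raw.put(None)                # sentinel: end of data


def update_model():
    # Middle stage: recalculate the model and signal the GUI.
    while (sample := raw.get()) is not None:
        frames.put(sample * 2)   # stand-in for the model recalculation
    frames.put(None)


gui_frames = []                  # stand-in for drawing to the LCD


def redraw():
    # Lowest-priority stage; a real GUI thread would run at ~60 Hz,
    # far slower than the 1 kHz data rate.
    while (frame := frames.get()) is not None:
        gui_frames.append(frame)


threads = [threading.Thread(target=f)
           for f in (acquire, update_model, redraw)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each stage only blocks on its input queue, a slow redraw never stalls data acquisition, which is the point of the decoupling.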
Apple just released some sample code on lazy loading images in a UITableView a week ago. I checked it out and implemented it into my own UITableView (which is a drawRect one for fast scrolling), to see if there was a difference from what I was already doing.
After implementing I am not sure what is best; the new code or what I already had. I am not seeing much of a speed improvement on my 3GS.
"Sandbox" method: Load images lazily, then save to a local tmp folder in the sandbox. Each time the cell is displayed, it checks whether an image with that filename is already located in the sandbox folder. If it is, it retrieves the image and displays it; if not, it continues with the download, saves it locally, and then displays it. The benefit of this is that the images won't be blank the second time you open the app. They will already be downloaded and ready for display.
Caching method: This also loads the images lazily; however, now I include a UIImage on each object in the array that's displayed in the table view. Instead of saving the image locally, I now download the image and put it into the array for the object. Now, instead of checking for the filename every single time, it just checks whether the UIImage != nil and uses the cached image (or downloads if nil).
A small difference is also that the caching code resizes the image before caching it, to the exact size displayed in the cell, whereas the image used in the sandbox code example is actually a bit larger than what it needs to display, which means it has to be resized on the fly when scrolling as well. I read months ago that this could be a bit expensive to do, and I am also not sure whether using a cached image instead of the sandbox-stored image makes much of a difference in CPU terms anyway (compared to what you save from resizing ahead of time with the caching code above).
I guess my question would be whether I should even bother with the caching code? Again, the new code won't immediately load images on a new launch, whereas the old code actually does because it's already in the sandbox. Since I am not reusing images, I have a lot of images to load (from the sandbox or cache) so I am not noticing a huge difference in speed. In fact, on my 3GS it's almost impossible to tell, in my opinion. The scrolling is not silky smooth, and I assume this is due to the large amount of images that I cannot reuse (different image for each cell). I am also wondering whether the sandbox method would get slower once there's 1000+ images in the folder, for example, eventually having it look through many more images than just 100 or so.
I hope I am making sense. I wanted to be pretty thorough with the details, and I am happy to give more details if needed.
Thanks!
If you have code that already works, and there's not a pressing problem, then don't change it.
If your scrolling actually is too slow, then perhaps you could use a mixture of both ideas: try to get the UIImage; if it's not there, load it from the sandbox; and if it's not there either, download it.
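That combined lookup order can be sketched like this (Python for illustration; `download` is a caller-supplied stand-in for your network code, and the names are hypothetical):

```python
from pathlib import Path


class ImageCache:
    """Three-tier lookup: in-memory cache, then the sandbox on disk,
    then the network - each miss falls through to the next tier and
    back-fills the tiers above it."""

    def __init__(self, sandbox_dir, download):
        self.memory = {}                 # tier 1: UIImage-style cache
        self.sandbox = Path(sandbox_dir) # tier 2: files in the sandbox
        self.download = download         # tier 3: fetch over the network

    def image_for(self, name):
        if name in self.memory:          # 1. fastest: already in memory
            return self.memory[name]
        path = self.sandbox / name
        if path.exists():                # 2. on disk from a previous run
            data = path.read_bytes()
        else:                            # 3. download, then persist
            data = self.download(name)
            path.write_bytes(data)
        self.memory[name] = data         # back-fill the memory tier
        return data
```

After the first launch most lookups hit tier 1 or 2, so the network is only touched for genuinely new images.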
The only good way to tell if there is any discernible difference in performance is to use profiling tools like Instruments (for measuring things like display framerate for the two techniques) or Shark (to determine hotspots in your code). There could be small differences in your exact implementation that could potentially cause significant differences between any general answer we could give and the actual performance you see in your application.
The thing that primarily concerns me with the "sandbox" method is not performance but disk space usage. Users won't appreciate you filling up their iPhone or iPod touch with unnecessary files, especially if all the images aren't consistently used or if the set of used images changes often. Without knowing more about your application, it's impossible to guess how often these cached images would be loaded.
If you're testing locally on your own device, you might be on a WiFi network. My recommendation would be to turn WiFi off for part of your testing to see how the two approaches perform when you have to fetch all the images over the cellular network. I would also recommend trying to find an older device (iPhone 3G or worse), because the 3GS does in fact hide potential performance issues that could be annoying for users on older devices.
I have personally used the LazyTableImages technique in my apps many times (provided it hasn't changed drastically between WWDC09 and the recent 'release') and find it to be just what I need. Caching images on disk wouldn't be an option in my case, however, and you shouldn't take my anecdote too strongly into account - profile your own code and use the results it shows.
Edit: The obvious answer is that accessing an in-memory cache is going to be faster than accessing the filesystem, but of course the final word on that is left up to profiling. If the images are already in memory, they don't need to be read from flash and parsed by UIImage. The traditional tradeoff comes into play here though - in-memory caching vs. disk space.
While it may be faster to store your images in memory, you need to be very sure that you correctly handle memory warnings in your application (as you should be doing anyway!). Otherwise long periods of use will lead to many, many images in your in-memory cache and trigger memory warnings, and if your application is not built to handle these, at worst your application will be killed by the OS due to lack of memory resources.
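The essential shape of a warning-aware cache is just a purge hook, sketched here in Python (`on_memory_warning` stands in for the handler you would wire to didReceiveMemoryWarning; names are hypothetical):

```python
class PurgeableCache:
    """In-memory cache that can be emptied when the OS signals
    memory pressure; evicted entries are simply reloaded on demand."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        # Returns None on a miss, telling the caller to reload the image.
        return self._store.get(key)

    def on_memory_warning(self):
        # Drop everything; correctness is unaffected because every
        # entry can be recreated from disk or the network.
        self._store.clear()
```

The key design point is that the cache only ever holds recreatable data, so purging it is always safe.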
There are pros and cons in both approaches that you present - I suggest using elements of both in your app.
It's better to keep your images in memory and save them later (perhaps when your app quits). If you have a lot of images, it might be faster to use Core Data to save them, than as regular files.
It's also better to avoid doing any resizing on the fly, i.e. in your tableView:cellForRowAtIndexPath: or tableView:willDisplayCell:forRowAtIndexPath: methods or in any method that has to do with drawing your cells' content view. If you can, ask the image provider (content management?) to supply images at the size that your table view displays.
I have a simple Core Data application, with a table view and a drill down view. When I build and run in the simulator, all of the data in my database loads and the table view/drill down views function properly.
When I build and deploy to a device, my application only loads a small subset of the data (a few sections, A - C). I have no warnings nor build errors. Has anyone run into this problem? Any suggestions?
While I have not run into that issue, I would debug this using log statements. Put logs throughout your loop and at the end of the NSURLConnection (assuming you are getting the data from the net) and see what is coming down, etc.
There is no reason for the default behavior to limit the data and, contrary to some other advice, you have at least 20 MB of RAM to use even on the first device. Plenty of room to load a ton of data. Spit out the data stream to the console from the phone and see what you are getting. That is where I would look first.
I think what #theband might be talking about is the fetchLimit property of NSFetchRequest (and its corresponding property fetchOffset). With these you can have more control over the fetching, and if you have a considerable amount of data you will need to, since you can't really plan on having more than about 8 MB of RAM for your app on most devices.
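Since Core Data stores are typically SQLite-backed, fetchLimit and fetchOffset correspond directly to SQL's LIMIT and OFFSET. A paging sketch using Python's sqlite3 for illustration (the schema and data are made up):

```python
import sqlite3

# An in-memory table standing in for a Core Data SQLite store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO records (name) VALUES (?)",
                 [(f"item{i}",) for i in range(100)])


def fetch_page(page, page_size=20):
    # page_size plays the role of fetchLimit,
    # page * page_size the role of fetchOffset.
    return conn.execute(
        "SELECT name FROM records ORDER BY id LIMIT ? OFFSET ?",
        (page_size, page * page_size)).fetchall()
```

Only one page of rows is ever materialized in memory at a time, which is exactly what keeps a large data set inside a small RAM budget.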
The problem is with the device, as sometimes it does not load the entire data set. The solution would be to check for limits on the amount of data being fetched.
To be clear, this is for a normal iPhone application, and not a game.
I've read around the web a few times some developers mentioning that they were working hard to improve/reduce the startup time of their applications, but never with any good background information on how to do so.
So the question is simple: how can you reduce the startup of iPhone applications?
Same as any other performance issue: Use Shark and/or Instruments to identify bottlenecks in your code, and then focus on how you can speed things up there. Each tool will give you a picture of how much time was spent in what parts of your code, so the general scheme would be to run the tool while you start the app and then pore over the data to see where the performance hits occur.
At app startup time, the most likely candidates for improvement will be deferring data loading until later on when it's actually needed, variously described as "on demand" or "lazy" loading. Essentially, don't load any data at app startup unless it's actually needed right away when the app loads. In practice, lots of stuff that may be needed at some point doesn't have to be immediately available when the app starts. For example, if you have a database of N records but only one is visible at a time, don't load all N into memory at app startup time. Load whatever the current record is and then load the others when you actually need them.
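The on-demand pattern described above can be sketched as follows (Python for illustration; `load_record` stands in for your actual disk or database read, and the names are hypothetical):

```python
class RecordStore:
    """Loads records only when first accessed, instead of loading
    all N of them at startup."""

    def __init__(self, load_record):
        self._load = load_record   # caller-supplied loader (e.g. disk read)
        self._cache = {}

    def record(self, index):
        # The expensive load is deferred to the first access of each
        # record; startup does no loading at all.
        if index not in self._cache:
            self._cache[index] = self._load(index)
        return self._cache[index]
```

Construction is instantaneous regardless of N; each record's load cost is paid once, at the moment it is actually displayed.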
James Thomson did a nice blog post documenting his efforts to make PCalc launch faster.
Of particular interest is his use of an image with a screenshot from the last app run, to pull the same trick Default.png does while loading the rest of the app.