I am trying to create a UWP app with a local database storage feature.
I looked at the UWP how-to (https://msdn.microsoft.com/en-us/windows/uwp/data-access/index) and see that there are two ways of doing this:
Either using the basic sqlite3 APIs (I don't wish to use any third-party wrapper libraries), or
Integrating Entity Framework with the app.
The latter does seem to be the better option, since it is an ORM, but I am curious: does adding it shoot up the app size in comparison to using the plain SQLite extension? Has anyone using it faced such problems?
App size should be one of the last things to worry about when creating UWP apps. Size is a one-time 'problem' the user pays at install, since the app takes some time to download. There are a lot of improvements built into the Windows 10 Store to tackle this problem:
Incremental updates (only update part of your app that changed)
File single instancing (if the file/library is on your system, it isn't downloaded again)
Partial resource downloads (only the language and scaled assets that fit for the device)
For more info, see e.g. this Build session.
If you want exact numbers, I would suggest building a small PoC with and without EF Core and comparing the sizes (although the size will also change a little because you write different code to make each version work).
But the added size from using EF Core won't be in the magnitude of hundreds of MBs. The packages themselves are small: EFCore.Sqlite is 71 KB, EFCore.Relational 475 KB, and EFCore 783 KB, and those figures even include multiple DLLs, XML files, and so on, so what you actually ship is only a fraction of that. Together with a few more base packages (Caching, Logging, ...) and maybe a few extra .NET Standard libraries that get pulled in but wouldn't be needed with the basic API, you'll have a few MB extra. In my opinion, that is negligible.
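To give a feel for what those packages buy you, here is a minimal sketch of the EF Core approach, assuming a hypothetical Contact entity and a local contacts.db file (only the Microsoft.EntityFrameworkCore.Sqlite package needs to be referenced; the rest come in as dependencies):

```csharp
// Minimal EF Core model and context; Contact is a hypothetical entity.
using Microsoft.EntityFrameworkCore;

public class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Phone { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Contact> Contacts { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlite("Data Source=contacts.db");
}

// Usage: create the database and insert a row in a handful of lines.
class Demo
{
    static void Main()
    {
        using (var db = new AppDbContext())
        {
            db.Database.EnsureCreated();
            db.Contacts.Add(new Contact { Name = "Ada", Phone = "555-0100" });
            db.SaveChanges();
        }
    }
}
```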
The things you SHOULD worry about instead of the initial download time are:
Application startup time.
Overall application performance.
Performance of the developer (and thus the cost of creating the app).
If your choice is between the native sqlite3 APIs and EF Core, it would be an easy pick for me (thinking of point 3 mentioned above). Just bear in mind that there are still issues with EF Core and .NET Native (the optimizations done by the Store, or when you build in Release mode with the .NET Native toolchain enabled). If you want to publish to the Store very soon and have a rather large database, you might have some trouble getting it crash-free. If you can sideload the app, just build without the .NET Native toolchain.
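To make point 3 concrete, compare the EF Core snippet above with roughly what the raw sqlite3 route looks like through P/Invoke. This is only a sketch under the assumption that a sqlite3.dll is available to the app; real code must also marshal text as UTF-8, prepare and bind statements, and check every return code:

```csharp
// Hand-rolled interop with the sqlite3 C API (sketch only; real code
// needs UTF-8 marshaling, sqlite3_prepare_v2/sqlite3_step for queries,
// and error handling on every call).
using System;
using System.Runtime.InteropServices;

static class RawSqlite
{
    [DllImport("sqlite3", CallingConvention = CallingConvention.Cdecl)]
    public static extern int sqlite3_open(string filename, out IntPtr db);

    [DllImport("sqlite3", CallingConvention = CallingConvention.Cdecl)]
    public static extern int sqlite3_exec(IntPtr db, string sql,
        IntPtr callback, IntPtr arg, out IntPtr errMsg);

    [DllImport("sqlite3", CallingConvention = CallingConvention.Cdecl)]
    public static extern int sqlite3_close(IntPtr db);
}

class RawDemo
{
    static void Main()
    {
        IntPtr db;
        if (RawSqlite.sqlite3_open("contacts.db", out db) != 0)
            throw new InvalidOperationException("sqlite3_open failed");

        IntPtr err;
        RawSqlite.sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS Contact(" +
            "Id INTEGER PRIMARY KEY, Name TEXT, Phone TEXT)",
            IntPtr.Zero, IntPtr.Zero, out err);

        RawSqlite.sqlite3_close(db);
    }
}
```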
I was talking to a friend of mine who knows a lot about JS and wasm. He told me the technology goes far beyond the web, since it is basically a way to run near-native applications on devices without actually giving them access to the computer.
That means that third-party or untrusted code on a smartphone, for instance, cannot accidentally or intentionally change other apps or parts of the system.
This seemed to me like the perfect foundation for a plugin system for an application I am working on.
I asked him about it, but he was unable to give me a clear answer.
So the question is: can I use WebAssembly outside of a web browser, with custom bindings, to safely allow users to extend the functionality of my application (a special image viewer) without sacrificing too much speed? It seems it should work using libnode or something, but is there a problem I might run into?
I don't know how much you know about WebAssembly, but it depends on what your plugins actually need to do. If they basically handle arrays and numeric data, without much interaction with the host application, then it might fit. But if you have heavy object handling, it will not fit at the moment. So for image processing it might be a perfect match, as it is used that way in some web examples. Also be aware that some WebAssembly toolchains are not suitable for non-web targets, as they also generate JavaScript glue code meant to be used in browsers alongside the generated wasm. Some wasm modules, for example, require that you call malloc and free for string handling; others have functions like new and gc for nearly the same purpose.
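To make that concrete, here is what hosting a wasm plugin could look like from .NET using the Wasmtime bindings. The choice of runtime and all names here are my assumption (the question doesn't name one), and the exact Wasmtime .NET API surface varies between versions, so treat the calls as illustrative:

```csharp
// Sketch: load a plugin.wasm that exports "transform(i32) -> i32" and
// expose exactly one host function to it. All names are hypothetical.
using System;
using Wasmtime;

class PluginHost
{
    static void Main()
    {
        using var engine = new Engine();
        using var module = Module.FromFile(engine, "plugin.wasm");
        using var linker = new Linker(engine);
        using var store = new Store(engine);

        // The plugin can only call what the host explicitly whitelists.
        linker.Define("host", "log",
            Function.FromCallback(store, (int value) => Console.WriteLine(value)));

        var instance = linker.Instantiate(store, module);

        // Typed accessor; returns null if the export is missing.
        var transform = instance.GetFunction<int, int>("transform");
        Console.WriteLine(transform(42)); // sandboxed, near-native call
    }
}
```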
I am about to start building an app that will be used across all platforms. I will be using MonoTouch and MonoDroid so I can keep things in .NET.
I'm a little lazy so I want to be able to reuse as much code as possible.
Let's say I want to create an application that stores contact information, e.g. name and phone number.
My application needs to be able to retrieve data from a web service and also store data locally.
The MVVM pattern looks like the way to go, but I'm not sure my approach below is 100% correct.
Is this correct?
A project that contains my models
A project that contains my views, local storage methods, and also the view models that my views bind to. In this case there would be three of these projects, one per OS.
A data access layer project that is used for binding to services and local data storage
Any suggestions would be great.
Thanks for your time
Not specifically answering your question, but here are some lazy pointers...
you can definitely reuse a lot of code across all 3 platforms (plus MonoWebOS?!)
reusing the code is pretty easy, but you'll need to maintain separate project files for every library on each platform (this can be a chore)
MVVM certainly works for WP7. It's not quite as well catered for in MonoTouch and MonoDroid
some of the main areas you'll need to code separately for each device are:
UI abstractions - each platform has their own idea of "tabs", "lists", "toasts", etc
network operations - the System.Net capabilities are slightly different on each
file IO
multitasking capabilities
device interaction (e.g. location, making calls etc)
interface abstraction and IoC (Ninject?) could help with all of these (see the sketch after this list)
The same unit tests should be able to run on all 3 platforms?
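As a minimal sketch of the interface-abstraction idea (all type names here are hypothetical, not from any framework): the view model lives in the shared project and talks only to an interface, while each platform project wires in its own implementation through the IoC container:

```csharp
// Shared project: platform-agnostic contract and view model.
using System.Collections.Generic;

public class Contact
{
    public string Name { get; set; }
    public string Phone { get; set; }
}

// Each platform (WP7, MonoTouch, MonoDroid) supplies its own store,
// e.g. isolated storage vs. the app's documents folder.
public interface IContactStore
{
    IList<Contact> LoadAll();
    void Save(Contact contact);
}

public class ContactsViewModel
{
    private readonly IContactStore _store;

    public ContactsViewModel(IContactStore store)
    {
        _store = store; // injected, e.g. by Ninject, per platform
    }

    public IList<Contact> Contacts { get; private set; }

    public void Refresh()
    {
        Contacts = _store.LoadAll();
    }
}
```

Because the view model never touches a platform API directly, the same unit tests can exercise it on all three platforms against a fake IContactStore.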
Update - I can't believe I just stumbled across my own answer... :) In addition to this answer, you might want to look at MonoCross and MvvmCross - and no doubt plenty of other hybrid platforms on the way:
https://github.com/slodge/MvvmCross
http://monocross.net (MVC rather than MVVM)
Jonas Follesoe's cross-platform development talk has to be the most comprehensive resource out there at the moment. He talks about how best to share code and resources, abstract out much of the UI and UX differences, shows viable reusable usage of MVVM across platforms, and covers nice techniques for putting together an almost automated build. (Yes, that includes a way for you to compile your MonoTouch stuff in Visual Studio.)
Best of all, he has made the source code available for the finished product and for a number of the major components, each placed in its own workshop project, along with a 50+ page PDF detailing the steps: FlightsNorway on GitHub.
IMO the only thing missing is how best to handle local data storage across all platforms, in which case I would direct you to Vici Cool Storage, an ORM that can work with WP7, MonoTouch and (while not officially supported) MonoDroid.
*Disclaimer* The site documentation isn't the most up to date, but the source code is available. (Because documentation is Kryptonite to many a programmer.)
I think the easiest way to write the code once and have it work on all three platforms will probably be a web-based application. Check out Untappd for example.
You can start by looking at Robert Kozak's MonoTouch MVVM framework. It's just a start though.
MonoTouch MVVM
I'm a novice iPhone developer and have just completed my first iPhone app.
After provisioning my iPhone for development, I noticed that the app used way too much memory, and that several memory leaks originating from the app's access to its SQLite database caused it to crash often. After running Instruments, I have decided that Core Data sounds like a better way to go; of course, I still need to use the SQLite database that I have.
So my question is this: how easy would it be to adapt an already existing app to run on Core Data? My app basically shows 17,000 different mapView annotations in small amounts by county, and my database is very simple, basically just one giant spreadsheet.
In the app delegate, my app opens the SQLite database, puts some of the data into a locations object with four attributes, and then makes an array of those objects.
The first view controller lets the user decide which county (one of twelve or so) to view, and then the last view controller uses a loop to add the selected locations to a mapView.
How should I modify the app described above so that it uses Core Data? Can you point me to any resources I can use to achieve this (preferably not the Dev Center's Core Data tutorial, which is overly confusing and more complicated than I need right now)? Or do I have to make an entirely new project in Xcode and start from scratch?
Apologies for the indirect pontification, but ...
First, I'd say your assertion, "After running Instruments, I have decided that Core Data sounds like a better way to go...", is flawed. Finding bugs and performance issues in your app doesn't automatically mean a different approach is better; it may only mean you need to fix your memory leaks and/or adjust your approach based on what your profiling shows.
The fact you say you still need to use an existing SQLite database doesn't automatically preclude using Core Data, but it does make your end solution significantly more complicated. If you can possibly get away with using either Core Data or SQLite entirely, that would by far be your best bet.
Also, Core Data is not a beginner (or, I'd argue, even an intermediate) Cocoa technology. It requires significant prerequisite knowledge to do anything more than very basic work without becoming hopelessly lost when the inevitable problem arises. If you're too pressed to take the time to read the documentation and research the technology for now, you're probably better off just fixing the problems with your existing solution.
... and there's nothing wrong with your existing solution (using SQLite directly) at its most basic. The better question(s) to ask is (are) about the specific problems you're having with your current approach.
That said, if you want to adapt an existing iOS-based solution to use Core Data, you'll likely have an easier time of it than if you were targeting Mac OS. Create a basic Core-Data-based iPhone app project and look at the code. The code to build the Core Data stack is in plain sight. The only other thing to remember is to add an xcdatamodel file like the one found in the empty project. If you've gotten far enough to interface with the SQLite library, you should have enough experience to see clearly how Core Data is used in a standard iOS app.
The steps you need to go to convert to Core Data:
- Develop your data model (maybe graphically, in Xcode)
- Adapt your code to use Core Data instead, i.e. rewrite that module
- Read your old data and put it into Core Data
If you have it already running with SQLite, I'd stick with it. If it uses far too much memory, that's probably a bug on your part, or bad partitioning of the data you load into memory, and you'll want to address it one way or the other - although Core Data maybe makes it a bit easier to get right.
(Presumably the following question is not iPhone specific, aside from the fact that we would likely use a Framework or dynamic library otherwise.)
I am building a proprietary iPhone SDK for a client, to integrate with their web back-end. Since we don't want to distribute the source code to customers, we need to distribute the SDK as a static library. This all works fine, and I have verified that I can link new iPhone apps against the library and install them on the device.
My concern is around third party libraries that our SDK depends on. For example we are currently using HTTPRiot and Three20 (the exact libraries may change, but that's not the point). I am worried that this may result in conflicts if customers are also using any of these libraries (and perhaps even different versions) in their app.
What are the best practices around this? Is there some way to exclude the dependent libraries' symbols from our own static library (in which case customers would have to manually link to both our SDK as well as HTTPRiot and Three20)? Or is there some other established mechanism?
I'm trying to strike a balance between ease of use and flexibility / compatibility. Ideally customers would only have to drop our own SDK into their project and make a minimal number of build settings changes, but if it makes things more robust, it might make more sense to have customers link against multiple libraries individually. Or I suppose we could distribute multiple versions of the SDK, with and without third party dependencies, to cover both cases.
I hope my questions make sense... Coming mainly from a Ruby and Java background, I haven't had to deal with compiled libraries (in the traditional sense) for a long time... ;)
If it were me I would specify exactly which versions of those 3rd party libraries my library interoperates with. I would then test against them, document them, and probably deliver with those particular versions included in the release.
Two things I would worry about:
- I would want to be sure it 'just works' when my customers install it.
- I wouldn't want to guarantee support for arbitrary future versions of those 3rd party libraries.
It is fine to include a process for the customer to move to newer versions, but if anything doesn't work then I would expect the customer to pay for that development work as an enhancement, rather than it being a free bug fix (unless you include that in the original license/support arrangement).
At that point it becomes an issue of ensuring your specific versions of the 3rd party libraries can work happily alongside anything else the customer needs (in your case a web back-end). In my experience that is usually a function of the library, e.g. some aren't designed so multiple versions can run side-by-side.
At what level of complexity is it mandatory to switch to an existing framework for web development?
What measurement of complexity is practical for web development? Code length? Feature list? Database Size?
If you work on several different sites then by using a common framework across all of them you can spend time working on the code rather than trying to remember what is located where and why.
I'd always use a framework of some sort, even if it's your own, as the uniformity will help you structure your project. Unless it's a one page static HTML project.
There is no mandatory limit however.
I don't think there is a level of complexity that necessitates a framework. For me, whenever I am writing a dynamic site I immediately consider a framework, and if it will save me time, I use it (it almost always does, and I almost always do).
Consider that the question may be faulty. Many of the most complex websites don't use any popular, preexisting, framework. Google has their own web server and their own custom way of doing things, as does Amazon, and probably lots of other sites.
If a framework makes your task easier, or provides added value, go for it. However, when you adopt that framework you are tied to a new dependency. I'm starting to essentially recreate a Joel on Software post, so I will redirect you here for more on adding unneeded dependencies to your code:
http://www.joelonsoftware.com/articles/fog0000000007.html
All factors matter. You should measure how much time you can save by using a 3rd-party framework and compare that to the risks of using others' code.
Never "mandatory." Some problems are not well solved by any framework. It would be suggestible to switch to a framework when most of the code you are implementing has already be implemented by the framework in question in a way that suits your particular application. This saves you time, energy, and will most likely be more stable than the fresh code you would have written.
This is really two questions, you realize. :-) The answer to the first one is that it's never mandatory, but honestly, parsing HTTP request parameters directly is pretty horrible right from the start. I don't want to do it even once, so I tend to reach for a framework relatively early on.
As far as what measurement is practical, well, what are you worried about? All of the descriptions that you list have value. Database size matters primarily for scaling, in my opinion (you can write a very simple app if you have a very simple schema, even if there are hundreds of thousands of rows in the database). The feature list will probably determine the number and complexity of UI pages, which will in turn help to dictate the code length.
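To illustrate why hand-parsing gets horrible fast, here is a sketch of the 'no framework' route in C# terms with a bare HttpListener (the endpoint and parameter names are made up); every parameter must be extracted, defaulted, and escaped by hand, on every page:

```csharp
// What "no framework" looks like: one hand-rolled endpoint.
using System;
using System.Net;
using System.Text;

class RawServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();

            // Every parameter is parsed, defaulted, and escaped by hand;
            // a framework would give you routing, binding, and templating.
            string name = ctx.Request.QueryString["name"] ?? "world";
            byte[] body = Encoding.UTF8.GetBytes(
                "Hello, " + WebUtility.HtmlEncode(name));

            ctx.Response.ContentType = "text/html";
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }
}
```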
There are frameworks aimed at getting moving very quickly with a simple blog (Django or RoR), all the way up to enterprise full-stack applications (Zope). And not to be tied to just the buzzwords, you also have ASP.NET, J2EE, etc.
All frameworks and libraries are tools at your disposal. Determine which ones will make your life easier for your given project and use them.
I would say the reverse is true. At some point, your project gets so expansive that you actually get slowed down by the shortcomings of the framework. For sufficiently large projects you may, in fact, be better off developing your own framework to meet your own needs. I have seen many times where people were held back in the decisions they could make, or the work they could produce, because they were trying to do something the framework didn't anticipate, and doing things the framework doesn't anticipate can be very troublesome. The nice thing about making your own framework is that it can evolve with your project, to be a help to your system instead of a hindrance.
So, to conclude: small projects should use existing frameworks; large projects should contain their own framework.