I'm working through the issues of using offline data sync with a mobile back end, and have worked through numerous articles and docs regarding its use.
The scenario I'm looking at is a bit different from the normal demo projects. We have an Azure SQL database that we have been using for some time now, and a use case has come up to expose this data through a mobile back end.
I have yet to see an article on best practices for using an existing SQL database (EF Code First) with offline data sync, where the sync tables need to inherit from EntityData, which adds additional fields for the sync process.
My original thought was to reuse the EF Code First model definitions with the Mobile App back end; however, the EntityData requirement rules that out, as I'm not going to add those fields to a production system.
My question is: what is the best practice for taking data from a production system and syncing it with mobile back ends? I'm thinking an intermediary database is required, but that just means there are three databases/tables to keep in sync, which doesn't feel right at this point in time.
Does anybody know of an article that starts with an existing Azure SQL database and walks through the process and decisions involved in enabling data sync to offline devices?
This scenario is discussed in detail in my book - http://aka.ms/zumobook - chapter 3.
I'm writing a very small REST API using Dart/Aqueduct hosted on Heroku utilizing PostgreSQL.
When communicating with this API, I need to fetch all data and store it locally in an application. On reboot, the application will ask the API whether any data has changed; the data is only ever modified through this API.
My question is: how do I check whether data has been modified? Storing a timestamp in the Aqueduct channel is not viable, as Heroku will boot servers up as needed (and multiple at the same time), which would reset such an in-memory modification time each time.
PostgreSQL cannot supply this modification time itself (https://dba.stackexchange.com/questions/58214/getting-last-modification-date-of-a-postgresql-database-table), so what do I do? Is the only way to do this to have a separate table storing this information? Can this be done in a more lightweight way, so I wouldn't have to query the database every time e.g. https://my-api.com/lastModified is called? Should I serve a static file, which would be written to on each data modification?
Maybe there exists a smart, lightweight solution!
Cheers :)
Yes, use a separate table to store the date; the ORM is already doing something similar to track migrations. I'm not sure what lightweight means in this context, but you won't run into any problems with this approach, whereas a file or in-memory storage won't work at all: Heroku dynos don't share memory or a filesystem, and the filesystem is ephemeral anyway.
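A minimal sketch of that approach, keeping the timestamp in PostgreSQL so every dyno sees the same value. Shown in TypeScript with node-postgres and Express rather than Dart/Aqueduct; the table, trigger, and endpoint names are hypothetical:

```
import express from "express";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// One-time setup, e.g. run from a migration. "items" is a hypothetical
// application table; add one trigger per table the API can modify.
const setupSql = `
CREATE TABLE IF NOT EXISTS last_modified (
  table_name  text PRIMARY KEY,
  modified_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION touch_last_modified() RETURNS trigger AS $$
BEGIN
  INSERT INTO last_modified (table_name, modified_at)
  VALUES (TG_TABLE_NAME, now())
  ON CONFLICT (table_name) DO UPDATE SET modified_at = now();
  RETURN NULL; -- AFTER trigger, return value is ignored
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS items_touch ON items;
CREATE TRIGGER items_touch
  AFTER INSERT OR UPDATE OR DELETE ON items
  FOR EACH STATEMENT EXECUTE PROCEDURE touch_last_modified();
`;

const app = express();

// GET /lastModified -> { "items": "2021-03-01T12:00:00.000Z", ... }
app.get("/lastModified", async (_req, res) => {
  const { rows } = await pool.query(
    "SELECT table_name, modified_at FROM last_modified"
  );
  res.json(Object.fromEntries(rows.map(r => [r.table_name, r.modified_at])));
});

pool.query(setupSql).then(() => app.listen(Number(process.env.PORT) || 3000));
```

Because the timestamp lives in the database itself, it stays consistent no matter how many dynos Heroku spins up, and the /lastModified call costs one read of a tiny table.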
I need to connect Salesforce to an external database we have, and constantly keep both the database and Salesforce updated, in as close to real time as we can get. I have tried Google searching for possible solutions, but nearly all of them are outdated by over a year. Any ideas?
Thank You!
It is quite difficult to give you a proper answer without knowing your exact scenario.
However, off the top of my head, I would suggest two Salesforce products.
Salesforce Connect
https://www.salesforce.com/products/platform/products/salesforce-connect/
Salesforce Connect allows you to connect to various data sources (for example MySQL, Microsoft SQL Server, or Oracle) and turn the tables/objects of that data source into SObjects. There are limitations, and thus it would be better to talk to a Certified Architect about such an implementation.
Heroku Connect
https://www.heroku.com/connect
Heroku Connect allows you to connect a Heroku data source with a Salesforce Object. The sync is not immediate but there are quite a few customisations inside the product to make the sync as "live" as possible. There are limitations and thus it would be better to talk to a Certified Architect about such an implementation.
Salesforce Connect has limitations. It's good for presenting data via the interface, but if you need to act on the data or report on it, it might not be the best bet.
For close-to-real-time hand-coded sync, look at the Streaming API, or use Salesforce Platform Events.
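To make that concrete, here is a minimal sketch of a Streaming API subscriber, assuming the jsforce Node library and a PushTopic you have already created; the topic name, credentials, and event handling are hypothetical:

```
// Sketch: subscribe to a PushTopic for near-real-time record changes.
import jsforce from "jsforce";

const conn = new jsforce.Connection({ loginUrl: "https://login.salesforce.com" });

async function main() {
  // SF_PASS must be password + security token for username/password login.
  await conn.login(process.env.SF_USER!, process.env.SF_PASS!);

  // "AccountUpdates" is a PushTopic created beforehand (a saved SOQL query
  // such as SELECT Id, Name FROM Account, with its notify flags enabled).
  conn.streaming.topic("AccountUpdates").subscribe((message: any) => {
    // Write the change to your external database here.
    console.log(message.event.type, message.sobject.Id);
  });
}

main().catch(console.error);
```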
If you want to use an ETL tool, my organization has had decent luck with DBAmp, a SQL Server add-on product that is fairly inexpensive compared to a lot of ETL tools ($1,625 annually): http://www.forceamp.com/ We're able to replicate the entire Salesforce database offline in SQL with DBAmp, push changes to the offline SQL copy, and upsert the changes. It's also a good backup solution via an offline full data copy. We got very good support from them as well when we encountered challenges.
Hope this helps.
I'm not sure if you are syncing one object or multiple objects, but you have a few options.
You can try the Salesforce-provided feature Salesforce Connect, which allows you to view and update data from your external source in Salesforce, but there are limitations with reporting and other factors you should consider.
If you make use of Heroku, Heroku Connect is your best bet.
You can also use a middleware ESB solution like MuleSoft, which can orchestrate keeping data in sync across multiple data sources and do batch loads; but depending on how often changes occur, keep an eye on API limits for inbound calls to Salesforce.
You can roll your own solution using Outbound Messages in workflow (or triggers that invoke an Apex class that calls out, though that is more cumbersome: you have to write the custom error handling and retry logic that you get for free with Outbound Messages) to send changes from Salesforce to a homegrown service that writes to your database, and have that homegrown service write back to Salesforce using the SOAP or REST API (see the write-back sketch after this answer). That would probably take you some time to build. You would also still need to be aware of API limits, depending on how many updates are made on the non-Salesforce side.
You can create a Canvas app which displays data from your DB in Salesforce as a tab, hooked up via SSO so users are automatically logged in. But again, there would be no reporting or other Salesforce features you could take advantage of.
But I really think you should spend some time determining which system is your source of truth, because that will determine how the data should be synced. You should also investigate whether you really need the sync to be real-time or near-real-time, or whether you can manage with something like an hourly true-up on the system that is not the source of truth.
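For the homegrown option above, the write-back half can be quite small. A sketch using the jsforce REST API client; the object and external-ID field names are hypothetical:

```
// Sketch: push an external-database change back into Salesforce over the
// REST API. Assumes "jsforce"; all names are hypothetical.
import jsforce from "jsforce";

const conn = new jsforce.Connection({ loginUrl: "https://login.salesforce.com" });
const ready = conn.login(process.env.SF_USER!, process.env.SF_PASS!);

export async function pushAccount(externalId: string, name: string) {
  await ready;
  // Upsert keyed on a custom external-ID field so the same call handles
  // inserts and updates, and retries stay idempotent.
  const result = await conn
    .sobject("Account")
    .upsert({ External_Id__c: externalId, Name: name }, "External_Id__c");
  if (!result.success) throw new Error(JSON.stringify(result.errors));
}
```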
I would like to give a web app with a PostgreSQL database 100% offline functionality. In an ideal case the database would be completely replicated in the browser per user and synchronized when online, so that the same code can be used to talk to both the offline and online databases. I know this is possible with PouchDB and CouchDB, but I have not found a solution that works with PostgreSQL. Is this at all possible?
Short answer: I don't know of anything like this that currently exists.
However, in theory, this could be made to work...(long answer:)
Write a PostgreSQL backend for levelup (one exists for MySQL: https://github.com/kesla/mysqldown)
Wire up pouchdb-server to read/write from your PostgreSQL db using PouchDB's existing LevelDB adapter (which in turn will have to be configured to use your Postgres backend). Congrats, you can now sync data using PouchDB!
Whether an approach like this is practical in reality for your application is a different question you'll have to answer.
You may be wondering, for example, "Will I be able to sync an existing complex schema with multiple tables to the client with this approach?" The answer is probably not: the mysqldown implementation of leveldown uses a single MySQL table with three fields (id, key, and value; see the source), and I imagine any general-purpose PostgreSQL adapter would be similar (though nothing says you can't write a special-purpose adapter just for your app).
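For concreteness, here is roughly the storage model such a general-purpose adapter would sit on; a real adapter would wrap these queries behind the leveldown interface. A hypothetical sketch using node-postgres:

```
// Sketch of the mysqldown-style storage model on PostgreSQL: the entire
// store collapses into one key/value table (mysqldown adds an id column,
// but the idea is the same). Assumes "pg"; names are hypothetical.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
// Backing table: CREATE TABLE kv (key text PRIMARY KEY, value bytea NOT NULL);

export async function put(key: string, value: Buffer): Promise<void> {
  await pool.query(
    `INSERT INTO kv (key, value) VALUES ($1, $2)
     ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value`,
    [key, value]
  );
}

export async function get(key: string): Promise<Buffer | undefined> {
  const { rows } = await pool.query(
    "SELECT value FROM kv WHERE key = $1", [key]
  );
  return rows[0]?.value;
}
```

Everything PouchDB writes would pass through that opaque key/value funnel, which is why an existing multi-table schema doesn't map onto it directly.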
On the other hand, if you were to implement a couchdb-compatible API (or a subset- you may not need attachments, for example) over your existing database schema, there's nothing stopping you from using PouchDB on the client to talk directly to that as if it were an actual CouchDB - just pop in the URL and call replicate()! Implementing the replication protocol might be a fair bit of work, since you'd need to track revisions and so on somewhere - but again, technically not impossible!
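The client side of that arrangement is pleasantly small. A sketch with PouchDB, where the endpoint URL is hypothetical and assumed to speak the CouchDB replication protocol:

```
import PouchDB from "pouchdb";

const local = new PouchDB("app");                                // lives in the browser
const remote = new PouchDB("https://api.example.com/couch/app"); // hypothetical CouchDB-compatible endpoint

// Continuous two-way replication that survives going offline.
local
  .sync(remote, { live: true, retry: true })
  .on("change", info => console.log("replicated", info.direction))
  .on("error", err => console.error("sync error", err));

// Application code talks only to the local db; sync happens in the background.
local.put({ _id: "todo:1", title: "buy milk" }).catch(console.error);
```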
There are also implementations of levelup's backend storage that are designed for browsers. See level.js, which could be another way to sync between a server-side Postgres levelup backend and the browser.
TL;DR: There's tons of work being done around JavaScript databases right now. Is syncing with Postgres impossible? Probably not. Would it be a lot of work? Definitely. Worth it? Who knows, but it would be cool.
Without installing PostgreSQL on the client? No. Obviously you can cache data for offline use, but an entire RDBMS plus procedural languages in JavaScript, no.
I am new to iPhone app development and have a question about storing data. I've spent quite some time learning about Core Data but am still confused about the concept of the persistent store.
What I understand is that Core Data is just a way of managing data you have downloaded from an external database. But given that Core Data is backed by SQLite, does that mean a SQLite database exists in memory while the app is running? If so, would it be more effective to download a huge data set at startup? And what about apps such as Twitter or Facebook that require constant updates of data: is a straight NSURLConnection sufficient in those cases? If Core Data is used, will the extra overhead (i.e. data objects) be a burden for such frequent updates?
I would also like to find out some common ways of setting up an online database for an iPhone app. Is it usually a MySQL server with a homemade Python wrapper that translates the data into JSON? Would any standard server provider offer the whole package? Or is there open source code?
Many many thanks!
I'm going to go through and try to address each of your questions; let me know if I missed one!
Firstly, Core Data can also be used to store information generated in your app; there is nothing keeping you from using it one way or the other.
The way I understand it, the file (or other storage mechanism Core Data uses) exists regardless of whether or not your app is running. Making a user wait for a large database to be downloaded and loaded into a local store before they can interact with your application is not the best way to do it, in my opinion; people react negatively to an unresponsive UI. When a user runs your app for the first time, it's possible you may need to fetch a larger set of data; if any of it is generic it can ideally be preloaded, and the rest should be downloaded as the user attempts to access it.
Facebook and Twitter applications work just as you understand: a connection is established and the information is pulled from the appropriate site; the only thing they store is profile information, as far as I know. I would hesitate to use Core Data to store people's information, as eventually, yes, there would be a significant amount of overhead caused by storing people's news feeds or messages going back months on end.
As for setting up an online database, that is something I'm unfamiliar with, so hopefully someone else can provide some insight; if I find something I think may be of use, I will post back here for you. This part may actually merit its own separate question.
Let me know if you need me to elaborate on anything!
I have developed a number of departmental client-server applications, and am now ready to begin working on moving one of these applications to a SaaS model. I have done some basic web development, but I'm a newbie when it comes to SaaS architectures.
One of the first questions that comes to mind as I try to design the architecture is single- vs. multi-tenancy. The pros and cons of each vary significantly depending on the type of application and the scale required, so I'd like to describe my application and scale needs below, and I hope others can comment on how I should get started with the architecture.
The client-server application currently consists of a Firebird database and a Windows application. The database contains about 20 tables, with a few thousand records in the 4 primary tables and a few hundred records in various lookup and related tables. Although the number of records is small, the size can get large, as the database can contain large BLOBs. Each customer sets up their own database and has a handful of users within the organization connected to it. When I update the db schema, a new Windows application is released; it checks the db schema and then applies the updates as needed.
For the SaaS application, I am designing for hundreds (not thousands or millions) of new customers per year. My first thought was to go with a multi-tenancy model to make updates easy (shut down, apply the updates to one database, and then start up). On the other hand, a single-tenancy model would provide a means to roll updates out to a group of customers at a time and spread the risk of data corruption: if something goes wrong with a database, it impacts one customer instead of all of them. With this idea, I was thinking of having a single web front end which would connect to a single customer database upon login. Thus, when a new customer creates an account, a new database would be created (each customer would have their own db, with multiple users as needed).
In this model, a db update would require either a process that goes through each db to apply schema changes, or a trigger upon login that initiates a schema update, similar to the client-server model currently in use.
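A minimal sketch of the login-triggered variant, assuming each tenant database carries a schema_version table. Postgres and node-postgres are used for illustration (the system described uses Firebird), and all names are hypothetical:

```
// Sketch: on login (or from a batch job), bring one tenant database up to
// the application's schema version before serving requests.
import { Pool } from "pg";

// Ordered migrations; applying migrations[v] produces schema version v + 1.
const migrations: string[] = [
  "ALTER TABLE orders ADD COLUMN notes text",
  "CREATE INDEX orders_customer_idx ON orders (customer_id)",
];

export async function ensureSchema(tenantDbUrl: string): Promise<void> {
  const pool = new Pool({ connectionString: tenantDbUrl });
  const client = await pool.connect();
  try {
    await client.query(
      "CREATE TABLE IF NOT EXISTS schema_version (version int NOT NULL)"
    );
    const { rows } = await client.query(
      "SELECT coalesce(max(version), 0) AS v FROM schema_version"
    );
    for (let v = Number(rows[0].v); v < migrations.length; v++) {
      await client.query("BEGIN");
      try {
        await client.query(migrations[v]);
        await client.query(
          "INSERT INTO schema_version (version) VALUES ($1)", [v + 1]
        );
        await client.query("COMMIT");
      } catch (err) {
        await client.query("ROLLBACK");
        throw err;
      }
    }
  } finally {
    client.release();
    await pool.end();
  }
}
```

The same function, called in a loop over all tenant connection strings, covers the batch-rollout option; called at login, it covers the lazy one.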
Can anyone point me to information for similar applications which have been ported from client-server to SaaS? Or provide any pointers to consider? Basically I'm looking for architecture examples of taking a departmental application and making it available as a self service website for multiple customers. Thanks for any suggestions, resources, etc.
Good questions.
One thing that comes to mind is that if you have multiple databases which you roll out in a staged manner to reduce the likelihood of breaking all of your customers, you will have to address the issue of what to do if the db structure changes. You will either have to be very rigorous with respect to maintaining backward compatibility, or else deploy separate versions of your code base and somehow manage which tenants are associated with which databases.
We are providing our application using a SaaS model as well.
Initially it was a Windows app which worked similarly to your multiple-database proposal. Upon login, the Windows app would authenticate against a single "licensee" database, which would then respond with connection information for a database specific to that licensee. The nice thing about this was that it provided 1) physical separation of licensee data, which our customers liked, and 2) the ability to physically locate the database on a server geographically closer to the users, which both improves performance and avoids some potentially tricky legal and regulatory issues with respect to moving data across country boundaries.
Of course, since the app was a thick client app, we could get away with making database changes and pushing them out to one licensee at a time. When we were ready to upgrade, we could push out an updated thick client in conjunction with the new database - thereby ensuring that the codebase was a match with the database. As long as the common "licensee" authentication database stayed consistent, this worked fairly well.
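A sketch of that login flow, assuming a central licensee table keyed by account; node-postgres is used for illustration and all names are hypothetical:

```
// Sketch: authenticate against the central "licensee" database, then hand
// back a connection to that licensee's own database.
import { Pool } from "pg";

const licenseePool = new Pool({ connectionString: process.env.LICENSEE_DB_URL });
const tenantPools = new Map<string, Pool>();

export async function connectForLicensee(licenseeId: string): Promise<Pool> {
  const { rows } = await licenseePool.query(
    "SELECT db_url FROM licensees WHERE id = $1",
    [licenseeId]
  );
  if (!rows[0]) throw new Error("unknown licensee");

  // One pool per licensee database, created lazily and cached.
  let pool = tenantPools.get(licenseeId);
  if (!pool) {
    pool = new Pool({ connectionString: rows[0].db_url });
    tenantPools.set(licenseeId, pool);
  }
  return pool;
}
```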
On the other hand, though, this solution brought with it all of the problems of maintaining and managing a thick-client approach, which finally led us down the thin-client, browser-based path.
In our new model, everything is in a single database. When we have updates, we push both the code and the db out at the same time. This solves the problem of keeping the code base consistent with the database structure. However, we have now given up the benefits mentioned in 1) and 2) above, and have yet to resolve that.
I hope this provides some food for thought for you.
I, too, am interested in this question.
Thanks for the post.
-S