"Keep Data on Update" behaviour does not work in the Smartface - smartface.io

Whether I set the "Keep Data on Update" property to true or false, it has no effect.
My understanding of this property is that, when set to true, the data in the tables is preserved; that is not what happens.
No matter what I do, all the data is deleted when I run the project.
How can I make the data in the database persist?

"Keep Data on Update" protects the database when you update your application from one version to another (e.g. 1.0.0 -> 1.0.1), whether or not the data structure has changed; in other words, it applies to version updates of a deployed application, not to every run of the project.
For more information please check the link below:
http://www.smartface.io/developer/guides/data-network/table/

Related

What is "reload()" for in MongoEngine

I have a statement like: jsonify_ok(data=mytag.reload().as_dict())
What role does reload() play here, and in what situation would we normally use reload()?
Document.reload() will check the database and update your document (in this case mytag, though I can't see what that is) with any attributes that have been modified there.
This could be useful if the data could have changed, or has changed, before calling jsonify_ok.
Breaking down your data=mytag.reload(), this says: "for document mytag, go to the database and fetch the latest version of this document, assigning it to the keyword argument data".
Relevant documentation link

SpagoBI + Firebird DataSource (The result set is closed)

I am using SpagoBI version 3.6.0, Jaybird-2.2.2JDK_1.7 and Firebird 2.5 (x64). I set up a data source and the connection test is OK.
I set up a dataset and the preview shows the correct list of columns, but no data. Access via another SQL viewer shows the data.
The error message in the Catalina log is:
org.firebirdsql.jdbc.FBSQLException: The result set is closed
Does anybody have an idea what I did wrong?
After some testing, the solution to your problem is to specify the connection property defaultHoldable=true in the connection URL of the data source, for example:
jdbc:firebirdsql://localhost/database?defaultHoldable=true
As commented earlier you also need to upgrade to Jaybird 2.2.7, otherwise you will be confronted with bugs JDBC-304 and/or JDBC-305.
I haven't checked the code of SpagoBI, but it looks like SpagoBI assumes that result sets are always holdable over commit and executes its queries using auto commit. It should either not use auto commit, or check the DatabaseMetaData.getResultSetHoldability() and/or Connection.getHoldability() and explicitly request holdable result sets.

memcached sometimes holding corrupt data

I have been using Memcached (AWS ElastiCache) for a while now.
Just today I ran into a situation that I hadn't experienced before. There is a regular call to the database for a list of countries, and I store the result in memcached. This time, however, the data wasn't stored correctly (I'm not sure why, as it has worked fine for months). After looking over the code and trying code-based fixes (assuming something was wrong with the site code), a bounce of the cache fixed the issue. Note: I had bounced memcached the day before, so maybe it didn't warm up correctly.
My question is: currently I check whether the memcached key exists, and if it does I use the data; only if the key doesn't exist do I query the database and populate it. Do I also need to validate the data somehow so I can be sure it's not corrupt, or should this be treated as an infrequent issue (which it is) and left at that?
Also, I believe the memcached key didn't have any data in it, so maybe just checking whether the key is empty is good enough...
Code below:
public $countryList = array();

// Countries, country code, zip enabled --- cached under 'generic::countryList::'.$_SESSION['language']
public function countryList() {
    $elasticache = new elasticache();
    $key = 'generic::countryList::'.$_SESSION['language'];
    // Treat a missing (or empty/false) value as a cache miss and rebuild from the database
    if( !$this->countryList = $elasticache->memcached->get($key) ) {
        // --- this is where the database query code is
        $elasticache->memcached->set($key, $this->countryList, 2592000); // 30-day TTL
    }
}
I guess confirming that the data in the key is correct would require a database call and therefore defeat the purpose of memcached...
Thoughts and ideas?

How to properly handle mongoose schema migrations?

I'm completely new to MongoDB and Mongoose and can't seem to find an answer on how to handle migrations when a schema changes.
I'm used to running migration SQL scripts that alter the table structure and any underlying data that needs to be changed; this typically involves DB downtime.
How is this typically handled in MongoDB/Mongoose? Any gotchas I need to be aware of?
Coming to this with a reasonable understanding of how migrations work on a relational database, I found MongoDB makes this a little simpler. I've come to two ways to break it down. The things to consider when dealing with data migrations in MongoDB (not all that different from relational databases) are:
Ensuring local test environments do not break when a developer merges the latest from the project repository
Ensuring any data is correctly updated on the live version, regardless of whether a user is logged in or out, if authentication is used. (Of course, if everyone is automatically logged out when an upgrade is made, then you only need to worry about the moment a user logs in.)
1) If your change will log everyone out or application downtime is expected, then the simple way to do this is to have a migration script that connects to the local or live MongoDB and upgrades the correct data. Here is an example where a user's name is changed from a single string to an object with given and family names (very basic, of course, and it would need to be put into a script for all developers to run):
Using the CLI:
mongo
use myDatabase
db.myUsers.find().forEach( function( user ){
    var curName = user.name.split(' '); // need some more checks..
    user.name = { given: curName[0], family: curName[1] };
    db.myUsers.save( user );
});
2) You want the application to migrate the schemas up and down based on the application version being run. This will obviously be less of a burden for a live server and will not require downtime, because users are only upgraded when they first use the upgraded / downgraded version.
If you're using middleware in Express.js for Node.js:
Set an app variable in your root app script via app.set('schemaVersion', 1); it will be compared to the user's schema version later.
Now ensure all the user schemas have a schemaVersion property as well, so we can detect a difference between the application's schema version and the version stored for that particular user only.
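For instance, a minimal sketch of what that property could look like on a mongoose user schema (the field name and shape here are illustrative, not prescribed):

var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
    name: mongoose.Schema.Types.Mixed,          // a string in v1, {given, family} in v2
    schemaVersion: { type: Number, default: 1 } // compared against app.get('schemaVersion')
});

var User = mongoose.model('User', userSchema);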
Next we need to create simple middleware to compare the app version with the user's version:
app.use( function( req, res, next ){
    // If we're not on an authenticated route, there is nothing to compare
    if( !req.user ){
        next();
        return;
    }
    // Retrieving the user info will be server dependent
    if( req.user.schemaVersion === app.get('schemaVersion') ){
        next();
        return;
    }
    // Handle upgrade if the user version is less than the app version
    // Handle downgrade if the user version is greater than the app version
    // Save the user version to your session / auth token / MongoDB where necessary
})
For the upgrade / downgrade I would make simple JS files under a migrations directory, each exporting upgrade / downgrade functions that accept the user model and run the migration changes on that particular user in MongoDB; a sketch follows below. Lastly, ensure the user's version is updated in your MongoDB so they don't run the changes again unless they move to a different version.
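As a minimal sketch of what one of those files might look like, reusing the name change from the CLI example above (the file name and callback style are illustrative assumptions):

// migrations/1-to-2.js -- splits user.name into { given, family }
module.exports.upgrade = function( user, done ){
    var parts = ( user.name || '' ).split(' '); // needs some more checks, as above
    user.name = { given: parts[0], family: parts[1] };
    user.schemaVersion = 2; // record the new version so the migration is not re-run
    user.save( done );
};

module.exports.downgrade = function( user, done ){
    user.name = [ user.name.given, user.name.family ].join(' ');
    user.schemaVersion = 1;
    user.save( done );
};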
If you're used to SQL-style or Rails-like migrations, then you'll find my CLI tool migrate-mongoose a good fit.
It allows you to write migrations with an up and a down function, and it manages the state for you based on the success or failure of your migrations.
It also supports ES6 (ES2015) syntax.
You get access to your mongoose models via the this object, making it easy to make the changes you need to your models and schemas.
There are two types of migrations:
Offline: requires you to take your service down for maintenance, then iterate over the entire collection and make the changes you need.
Online: does not require taking your service down for maintenance. When you read a document, you check its version and run a version-specific migration routine for each version between the old one and the new one; then you load the resulting document.
Not all services can afford an offline migration, so I recommend the online approach.
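A minimal sketch of that read-time chain (the version map, field names, and current-version constant are assumptions for illustration):

// One routine per old version, each lifting a document a single version up
var migrations = {
    1: function( doc ){ // v1 -> v2: split the single name string
        var parts = ( doc.name || '' ).split(' ');
        doc.name = { given: parts[0], family: parts[1] };
        doc.schemaVersion = 2;
        return doc;
    }
    // 2: function( doc ){ ... v2 -> v3 ... }
};

var CURRENT_VERSION = 2;

// Run every routine between the document's stored version and the current one;
// persist the result afterwards so the work is not repeated on the next read.
function migrateToCurrent( doc ){
    while( doc.schemaVersion < CURRENT_VERSION ){
        doc = migrations[ doc.schemaVersion ]( doc );
    }
    return doc;
}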

Breeze column based security

I have a "web forms", "database first enitity" project using Breeze. I have a "People" table that include sensitive data (e.g. SSN#). At the moment I have an IQueryable web api for GetPeople.
The current page I'm working on is a "Manage people" screen, but it is not meant for editing or viewing of SSN#'s. I think I know how to use the BeforeSaveEntity to make sure that the user won't be able to save SSN changes, but is there any way to not pass the SSN#s to the client?
Note: I'd prefer to use only one EDMX file. Right now the only way I can see to accomplish this is to have a "View" in the database for each set of data I want to pass to the client that is not an exact match of the table.
You can also use JSON.NET serialization attributes (such as JsonIgnore) to suppress serialization of the SSN from the server to the client. See the JSON.NET documentation on serialization attributes.
Separate your tables. (For now, this is the only solution that comes to mind.)
Put your SSN data in another table with a related key (a one-to-one relation) and the problem is solved. (Just handle your save in case you need it.)
If you are using Breeze this will work, because you have almost no control over the Breeze API interaction after the user logs in, so it is safer to separate your data. (Breeze is usually great, but in this case it's harmful.)