I have a Firestore data structure like this (shown in the screenshot):
How can I get only the ts_app_theme document named default?
I already tried a query like this:
SELECT * FROM ts_app_theme WHERE __key__ HAS ANCESTOR KEY(ts_app_config, '$merchant_key') AND __key__ = KEY(ts_app_theme, 'default')
but it didn't work.
I can do this:
SELECT * FROM ts_app_theme WHERE __key__ HAS ANCESTOR KEY(ts_app_config, '$merchant_key')
but it will also return other documents from ts_app_theme once I add more documents besides default later.
Thank you.
GQL is for Firestore in Datastore mode. You can find it in the Google docs here and here. The Datastore data model has a kind, which can be compared to a SQL table, and an entity, which can be compared to a SQL row. Because of this structure it's possible to use SQL-like querying.
Firestore in native mode, on the other hand, like in your screenshot, is a NoSQL database. By design it does not have a schema that could be used to build SQL queries. BTW, I wonder where you ran those queries; possibly there is some nice tool I don't know of.
Regarding querying Firestore, I found a nice tutorial series starting with this.
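If the structure in the screenshot is a ts_app_theme subcollection under each ts_app_config document, the Firestore client libraries let you address the single default document by its path instead of running a query at all. A minimal sketch with the Node.js Admin SDK; the collection and document names come from the question, but the SDK choice and the merchantKey variable are assumptions:
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// Read only ts_app_config/{merchantKey}/ts_app_theme/default.
async function getDefaultTheme(merchantKey) {
  const snap = await db
    .collection('ts_app_config').doc(merchantKey)
    .collection('ts_app_theme').doc('default')
    .get();
  return snap.exists ? snap.data() : null;
}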
I wonder, how do I change a live data schema with MongoDB?
For example, if I have a "Users" collection with the following document:
var user = {
    _id: 123312,
    name: "name",
    age: 12,
    address: {
        country: "",
        city: "",
        location: ""
    }
};
Now, in a new version of my application, if I add a new property to the "User" entity, let us say weight, height, or adult (based on the user's age), how do I change all the current live data that does not have the adult property? I read about MapReduce and the group aggregation command, but they seem to be suited for analytic operations or other calculations, or am I wrong?
So what is the best way to change your current running data schema in MongoDB?
It really depends upon your programming language. MongoDB is really good at having a dynamic schema. I think your pattern of thought at the moment is too SQL-related, whereby you believe that all rows, even if they do not yet have a value, must have the new field.
The reality is quite different. The rows which have nothing meaningful to put in them do not require the field, and you can, in your application, just check whether the returned document has a value; if not, then you can assume, as in a fixed SQL schema, that the value is null.
This is one aspect where MongoDB shines: you don't have to apply that new field to the entire schema on demand; instead you can lazily fill it in as data is entered by the user.
So just code the field into your application and let the user do the work for you.
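For example, a minimal sketch of that check in JavaScript, using the adult property from the question (treating a missing field as null is an assumption about what your application considers "not set"):
// Treat a missing field the way a fixed SQL schema would treat NULL.
var adult = (user.adult !== undefined) ? user.adult : null;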
If you do want to add this field to existing documents, the best way is to write a loop, maybe in the mongo console, close to or on the primary of your replica set (if you have one, otherwise just on the server), like so:
db.users.find().forEach(function(doc){
    // Set the new field on each existing document ('44 stone' is just a placeholder value).
    doc.weight = '44 stone';
    db.users.save(doc);
});
That is currently the best way to do something like what you're asking.
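If the value you want to backfill is a constant and you don't need per-document logic, a single multi-document update with $set can do the same thing in one command. A sketch under the same assumptions as the loop above, with the same placeholder value:
// Backfill only the documents that don't have the field yet.
db.users.update(
    { weight: { $exists: false } },
    { $set: { weight: '44 stone' } },
    { multi: true }
);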
I'm trying to implement search functionality with autocomplete in a project I'm working on. So far I've managed to do this with a select column1, column2 where myColumn like %...%, but it isn't as responsive as I would like; I mean it's just OK, and it searches only one single column. The current version of MySQL with InnoDB tables doesn't support "match against". Are there any plans to upgrade the db version? Otherwise, could anyone suggest another way of achieving search + autocomplete (against a single table)?
Thanks!
Try www.rockitsearch.com; it has an autocomplete implementation. The only things you'll need to do are:
- create an account
- export your data
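Independently of rockitsearch, it's worth noting why the query in the question feels sluggish: a pattern with a leading wildcard (LIKE '%term%') can't use an index, while a prefix-anchored pattern can, which is often enough for autocomplete. A generic sketch with made-up table and column names, not something taken from the answer above:
-- An index on the searched column lets prefix matches use it.
CREATE INDEX idx_products_name ON products (name);

-- Index-friendly autocomplete lookup; '%term%' would scan the whole table instead.
SELECT name
FROM products
WHERE name LIKE 'ter%'
ORDER BY name
LIMIT 10;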
I'm interested in using the following audit mechanism in an existing PostgreSQL database.
http://wiki.postgresql.org/wiki/Audit_trigger
but I would like (if possible) to make one modification. I would also like to log the primary key's value so it could be queried later. So, I would like to add a field named something like "record_id" to the "logged_actions" table. The problem is that every table in the existing database has a different primary key field name. The good news is that the database has a very consistent naming convention: it's always <table>_id. So, if a table is named "employee", the primary key is "employee_id".
Is there any way to do this? Basically, I need something like OLD.FieldByName(x) or OLD[x] to get the value out of the id field and put it into the record_id field in the new audit record.
I do understand that I could just create a separate, custom trigger for each table that I want to keep track of, but it would be nice to have it be generic.
Edit: I also understand that the key value does get logged in the old/new data fields. But what I would like is to make querying the history easier and more efficient. In other words:
select * from audit.logged_actions where table_name = 'xxxx' and record_id = 12345;
Another edit: I'm using PostgreSQL 9.1.
Thanks!
You didn't mention your version of PostgreSQL, which is very important when writing answers to questions like this.
If you're running PostgreSQL 9.0 or newer (or able to upgrade) you can use this approach as documented by Pavel:
http://okbob.blogspot.com/2009/10/dynamic-access-to-record-fields-in.html
In general, what you want is to reference a dynamically named field in a record-typed PL/PgSQL variable like 'NEW' or 'OLD'. This has historically been annoyingly hard, and is still awkward but is at least possible in 9.0.
Your other alternative - which may be simpler - is to write your audit triggers in plperlu, where dynamic field references are trivial.
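If you stay with PL/pgSQL, one way to combine Pavel's dynamic-field trick with your naming convention is to convert the row to hstore and look the key up by name. A rough sketch of the relevant part of the trigger function, assuming the hstore extension is installed and that logged_actions has gained a record_id column as described; the variable names here are made up:
DECLARE
    pk_col text;
    pk_val text;
BEGIN
    pk_col := TG_TABLE_NAME || '_id';       -- convention: table "employee" -> "employee_id"
    IF (TG_OP = 'DELETE') THEN
        pk_val := hstore(OLD.*) -> pk_col;  -- requires CREATE EXTENSION hstore;
    ELSE
        pk_val := hstore(NEW.*) -> pk_col;
    END IF;
    -- ... then write pk_val into record_id in the INSERT INTO audit.logged_actions ...
END;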
How can I write a query with ORMLite instead of using .create or anything else like that? Can you please show me how for this simple example:
SELECT name FROM client
EDIT, since I can't answer myself:
I guess I had to search a little more. Anyway, I found out how to do it with the QueryBuilder, like this:
newDao.query(newDao.queryBuilder().where().eq("name", valueofname))
If someone knows how to write the full query, that would be great; otherwise, I'll stick with this solution.
How can I write a query with ormlite instead of using .create or any other thing like that?
Goodness, there is a ton of documentation about how to do this on the ORMLite site. Here's the section on the query builder.
I'm not sure what you mean by "full query" but your example will work with some tweaks:
List<...> results = newDao.queryBuilder().where().eq("name",valueofname).query();
It does not make sense to just return the name since the Dao hierarchy is designed to return the specific Client object. If you just want the name then you can specify that only the name column should be returned:
... clientDao.queryBuilder().selectColumns("name").where()...
That will return a list of Client objects with just the name field (and the id field if it exists) extracted from the database.
If you just want the name strings then you can use the RawResults feature.
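A small sketch of that last option, assuming clientDao is the Dao for the client table; queryRaw() returns each row as a String array:
// Pull only the name column as raw strings instead of mapped Client objects.
GenericRawResults<String[]> rawResults =
        clientDao.queryRaw("SELECT name FROM client");
for (String[] row : rawResults) {
    String name = row[0];   // the single selected column
    // ... use the name ...
}
rawResults.close();         // releases the underlying statement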
I'm a newbie in ZF and have a stupid question:
What's the best way to count the rows in a table when working with an object that inherits from the Zend_Db_Table_Abstract class?
For my first web application I used the QuickStart tutorial (link text), so if I want to count the rows of a table in a controller, the simplest solution would be something like this:
$guestbooks = new Default_Model_GuestBook();
$count = count($guestbooks->fetchAll());
But I don't think that fetchAll() is the best solution just to count the rows in a table, because the GuestBook table can be really huge. Maybe it is possible to use something easier and simpler?
I found in the manual that it is possible to work directly with the DB adapter (like $db->query("SELECT COUNT(*) FROM GuestBook");), but in the QuickStart tutorial I don't have that object in the controller and I really don't want to create it just for one simple action.
I'll be waiting for suggestions!
Thanks
Your model already contains a DB adapter because it also works with the DB. You can get access to the DB adapter using the getAdapter() method.
$guestbooks->getAdapter()->query("SELECT COUNT(*) FROM GuestBook");
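Note that query() gives you back a statement object, so you still have to fetch the value from it; the adapter's fetchOne() does both steps at once. A small sketch in the same controller context as the question:
$guestbooks = new Default_Model_GuestBook();
// fetchOne() runs the SQL and returns the first column of the first row,
// i.e. the COUNT(*) value, without loading every row the way fetchAll() would.
$count = (int) $guestbooks->getAdapter()->fetchOne('SELECT COUNT(*) FROM GuestBook');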