I've been using the Zend Framework database relationships for a couple of weeks now. My first impression is pretty good, but I do have a question about inserting related data into multiple tables. For a little test application, I've related two tables to each other by using a fuse table.
+---------------+ +---------------+ +---------------+
| Pages | | Fuse | | Prints |
+---------------+ +---------------+ +---------------+
| pageid | | fuseid | | printid |
| page_active | | fuse_page | | print_title |
| page_author | | fuse_print | | print_content |
| page_created | | fuse_locale | | ... |
| ... | | ... | +---------------+
+---------------+ +---------------+
Above is an example of my DB architecture.
Now, my problem is how to insert related data into two separate tables and insert the two newly created IDs into the fuse table at the same time. If someone could explain this or point me to a tutorial on the topic, I would appreciate it!
I assume you have separate models for each table. Then simply insert the data into the Prints table and store the returned ID in a variable. Then insert into the Pages table and store the returned ID in another variable. Finally, insert the data into your Fuse table. You do not need any "at the same time" (atomic) operation here. The IDs of newly inserted rows are returned by save() (I assume you use auto-increment fields for this).
$printsModel = new Application_Model_Prints();
$pagesModel  = new Application_Model_Pages();
$fuseModel   = new Application_Model_Fuse();

$printData = array('print_title' => 'foo',
                   ...);
$printId = $printsModel->insert($printData);

$pagesData = array('page_author' => 'bar',
                   ...);
$pageId = $pagesModel->insert($pagesData);

$fuseData = array('fuse_page'  => $pageId,
                  'fuse_print' => $printId,
                  ...);
$fuseId = $fuseModel->insert($fuseData);
This is pseudocode, so you may want to move the inserts into your models, do some normalisation, etc.
I also suggest paying more attention to your field naming convention. It usually helps, and right now you have fuseid but also fuse_page. It should be either fuse_id or fusepage (not to mention that I suspect this field stores an id, so it would be fuse_page_id or fusepageid).
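The insert-then-capture-IDs flow above is not Zend-specific. Here is a minimal runnable sketch of the same pattern using Python's sqlite3 (table and column names borrowed from the question, schema simplified; sqlite3 stands in for whatever RDBMS you use):

```python
import sqlite3

# In-memory database with the three tables from the question (simplified).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pages  (pageid  INTEGER PRIMARY KEY, page_author TEXT);
    CREATE TABLE prints (printid INTEGER PRIMARY KEY, print_title TEXT);
    CREATE TABLE fuse   (fuseid  INTEGER PRIMARY KEY,
                         fuse_page  INTEGER REFERENCES pages(pageid),
                         fuse_print INTEGER REFERENCES prints(printid));
""")

cur = conn.cursor()

# 1. Insert into the two entity tables and capture the generated IDs.
cur.execute("INSERT INTO prints (print_title) VALUES (?)", ("foo",))
print_id = cur.lastrowid
cur.execute("INSERT INTO pages (page_author) VALUES (?)", ("bar",))
page_id = cur.lastrowid

# 2. Link them in the fuse table using the captured IDs.
cur.execute("INSERT INTO fuse (fuse_page, fuse_print) VALUES (?, ?)",
            (page_id, print_id))
conn.commit()

row = conn.execute("SELECT fuse_page, fuse_print FROM fuse").fetchone()
print(row)  # -> (1, 1)
```

The key point is that each insert hands you back the generated key (lastrowid here, the insert() return value in Zend_Db), so the fuse row can simply be built from the two captured values.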
Prints and Pages are two entities. Create row classes for each:
class Model_Page extends Zend_Db_Table_Row_Abstract
{
public function addPrint($print)
{
$fuseTb = new Table_Fuse();
$fuse = $fuseTb->createRow();
$fuse->fuse_page = $this->pageid;
$fuse->fuse_print = $print->printid;
$fuse->save();
return $fuse;
}
}
Now when you create a page:
$page = $pageTb->createRow(); // instance of Model_Page is returned
$page->addPrint($printTb->find(1)->current());
+----+----------+-------------+
| id | title    | description |
+----+----------+-------------+
|  1 | The King | Jonh X      |
+----+----------+-------------+
Two concurrent statements:
update book set title = 'aaa', description = 'aaa' where id = 1
update book set title = 'bbb', description = 'bbb' where id = 1
Is the following result theoretically possible?
+----+-------+-------------+
| id | title | description |
+----+-------+-------------+
|  1 | aaa   | bbb         |
+----+-------+-------------+
update book set title = 'aaa', description = 'aaa' where id = 1
select title, description from book -> (The King, aaa)?
The statements are not wrapped in a transaction.
What about popular database systems like SQL Server and Postgres?
Generally, this is impossible in any ACID-compliant database.
ACID stands for atomicity, consistency, isolation, durability.
In particular, Postgres takes a write-lock on affected rows before the UPDATE and does not release it until the end of the transaction. (And every UPDATE runs inside a transaction, implicitly or explicitly.) Concurrent transactions trying to write to the same row must wait and re-evaluate filters once the lock is released. They may then change the row once more - or come up empty if the filters do not apply any more.
I have Celery set up and working together with Django. I have some periodic tasks that run. The Celery log shows that the tasks are executed and that they return something.
[2017-03-26 14:34:27,039: INFO/MainProcess] Received task: my_webapp.apps.events.tasks.clean_outdated[87994396-04f7-452b-a964-f6bdd07785e0]
[2017-03-26 14:34:28,328: INFO/PoolWorker-1] Task my_webapp.apps.events.tasks.clean_outdated[87994396-04f7-452b-a964-f6bdd07785e0] succeeded in 0.05246314400005758s: 'Removed 56 event(s)
| Removed 4 SGW(s)
'
But the result is not showing up on django-celery-results admin page.
These are my settings:
CELERY_BROKER_URL = os.environ.get('BROKER_URL')
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Europe/Stockholm'
CELERY_RESULT_BACKEND = 'django-cache'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_DB_SHORT_LIVED_SESSIONS = True # Fix for low traffic sites like this one
I have also tried setting CELERY_RESULT_BACKEND = 'django-db'. I know the migrations have been made (when using those settings), and the table exists in the database:
my_webapp=> \dt
List of relations
Schema | Name | Type | Owner
--------+--------------------------------------+-------+----------------
...
public | django_celery_beat_crontabschedule | table | my_webapp
public | django_celery_beat_intervalschedule | table | my_webapp
public | django_celery_beat_periodictask | table | my_webapp
public | django_celery_beat_periodictasks | table | my_webapp
public | django_celery_results_taskresult | table | my_webapp
...
(26 rows)
Google won't give me much help; most answers are about old libraries like djcelery. Any idea how to get the results into the table?
In Vapor, we can create many-to-many relationships by creating a Pivot<U, T> object, where U and T are the models that we want to link together. So if I want to create a system where Users can have many Files and many Files can belong to many Users, I'd associate them like this:
var alice = User(name: "Alice")
try! alice.save()
var sales = File(name: "sales.xclx")
try! sales.save()
var pivot = Pivot<User, File>(alice, sales)
try! pivot.save()
What I can't figure out for the life of me is how I would make a Pivot<User, File> contain additional information. For example, I'd like to know when this file was associated with Alice, or what permissions she has over it.
On a Relational database, Fluent creates this table for the Pivot<User, File> type.
+---------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+---------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| file_id | int(11) | NO | | NULL | |
| user_id | int(11) | NO | | NULL | |
+---------+---------+------+-----+---------+----------------+
But I'd like the ability to represent something like this:
+---------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+---------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| file_id | int(11) | NO | | NULL | |
| user_id | int(11) | NO | | NULL | |
| date | date | NO | | NULL | |
| perms | varchar | NO | | READ | |
+---------+---------+------+-----+---------+----------------+
The Pivot<U, T> object can be thought of as the "bare minimum" required fields for a pivoted relation like siblings.
If you want to add custom fields to this table, you can create your own class to act as the pivot as long as it has the required elements:
Table name for Foo and Bar is bar_foo (lowercase, alphabetically ordered)
There exist at least the three columns: id, bar_id, foo_id
In other words, the table created by your pivot class must have at least the elements a Pivot<Foo, Bar> preparation would have created.
With this done, you can create new pivot relations by creating and saving instances of your pivot class.
When .siblings() relations are called on your models that use this pivot table, the default Pivot<U, T> will still be created to perform the fetch. But, this won't create any issues since the required fields are present on the pivot table.
So, after having the same problem described by Andy and asking for a solution on the Vapor Slack, I was redirected here.
My implementation (using PostgreSQL) of the solution proposed by Tanner can be found here
The key is the Rating model:
it’s a plain Model subclass
it has an entity name of movie_user (as described by Tanner: the names of the relating models, in alphabetical order)
it has the fields userId (mapping to "user_id") and movieId (mapping to "movie_id"); both are of type Node
in prepare(Database) it again uses the name "movie_user" and defines the ID fields as Ints
With that set up you can define the following relationship convenience methods:
On Movie: all raters
extension Movie {
func raters() throws -> Siblings<User> {
return try siblings()
}
}
On User: all rated movies
extension User {
func ratedMovies() throws -> Siblings<Movie> {
return try siblings()
}
}
A new rating for a movie (for a user) can be added like this:
ratings.post() { request in
var rating = try Rating(node: request.json)
try rating.save()
return rating
}
As Rating is a Model subclass, we can create it directly from the request's JSON. This requires the client to send a JSON document that conforms to the node structure of the Rating class:
{
"user_id": <the rating users id>,
"movie_id": <the id of the movie to be rated>,
"stars": <1-5 or whatever makes sense for your usecase>
}
To get all actual Ratings for a given movie, you seem to have to resolve the relationship manually (at least I think so; maybe somebody can give me a hint on how to do this better):
let ratings = try Rating.query().filter("movie_id", movieId).all()
Also, there seems to be no way of calculating an average on the database side right now. I would have loved something like this to work:
// this is NOT how it works
let averageRating = try Rating.query().filter("movie_id", movieId).average("stars")
So I hope this helps anybody coming across this problem. And thanks to all the wonderful people who contribute to the Vapor project!
Thanks to @WERUreo for pointing out that the part where a rating is created was missing.
I'm currently attempting to modify an existing API that interacts with a Postgres database. Long story short, it essentially stores descriptors/metadata to determine where an actual 'asset' (typically a file of some sort) is stored on the server's hard disk.
Currently, it's possible to 'tag' these 'assets' with any number of undefined key-value pairs (i.e. uploadedBy, addedOn, assetType, etc.). These tags are stored in a separate table with a structure similar to the following:
+---------------+----------------+-------------+
|assetid (text) | tagid(integer) | value(text) |
|---------------+----------------+-------------|
|someStringValue| 1234 | someValue |
|---------------+----------------+-------------|
|aDiffStringKey | 1235 | a username |
|---------------+----------------+-------------|
|aDiffStrKey | 1236 | Nov 5, 1605 |
+---------------+----------------+-------------+
assetid and tagid are foreign keys from other tables. Think of the assetid representing a file and the tagid/value pair is a map of descriptors.
Right now, the API (which is in Java) creates all these key-value pairs as a Map object. This includes things like timestamps/dates. What we'd like to do is somehow store different types of data for the value in the key-value pair, or at least store it differently within the database, so that if we needed to, we could run queries checking date ranges and the like on these tags. However, if they're stored as text items in the db, then we'd have to a) know that this is actually a date/time/timestamp item, and b) convert it into something that we could actually run such a query on.
There is only one idea I could think of thus far without completely changing the layout of the db.
It is to expand the assettag table (shown above) with additional columns for the various types (numeric, text, timestamp), allow them to be null, and then on insert check the corresponding 'key' to figure out what type of data it really is. However, I can see a lot of problems with that sort of implementation.
Can any PostgreSQL-Ninjas out there offer a suggestion on how to approach this problem? I'm only recently getting thrown back into the deep-end of database interactions, so I admit I'm a bit rusty.
You've basically got two choices:
Option 1: A sparse table
Have one column for each data type, but only use the column that matches the data type you want to store. Of course this leads to most columns being null - a waste of space, but purists like it because of the strong typing. It's a bit clunky having to check each column for null to figure out which data type applies. Also, too bad if you actually want to store a null - then you must choose a specific value that "means null", which is more clunkiness.
Option 2: Two columns - one for content, one for type
Everything can be expressed as text, so have a text column for the value and another column (int or text) for the type, so your app code can restore the value as an object of the correct type. The good things are that you don't have lots of nulls, and, importantly, you can easily extend the types beyond SQL data types to application classes by storing their value as JSON and their type as the class name.
I have used option 2 several times in my career and it was always very successful.
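The "restore the correct value in the correct type" step of option 2 boils down to a dispatch on the type column. A minimal sketch in Python (the type names and the ISO datetime format are my own choices, nothing mandated by SQL):

```python
from datetime import datetime

def to_columns(value):
    """Serialize a value into the (type, text) pair stored in the two columns."""
    if isinstance(value, bool):  # check bool before int: bool is an int subclass
        return ("bool", "true" if value else "false")
    if isinstance(value, int):
        return ("int", str(value))
    if isinstance(value, float):
        return ("float", repr(value))
    if isinstance(value, datetime):
        return ("timestamp", value.isoformat())
    return ("text", str(value))

# Restore the original typed value by dispatching on the type column.
DECODERS = {
    "bool": lambda s: s == "true",
    "int": int,
    "float": float,
    "timestamp": datetime.fromisoformat,
    "text": str,
}

def from_columns(type_name, text):
    return DECODERS[type_name](text)

stamp = datetime(2019, 5, 31, 13, 51, 36)
assert from_columns(*to_columns(stamp)) == stamp
assert from_columns(*to_columns(42)) == 42
```

With this in place, a date-range query on the tag table becomes possible by filtering on the type column first and casting the text column (e.g. `WHERE type = 'timestamp' AND value::timestamp BETWEEN ... AND ...` in Postgres).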
Another option, depending on what you're doing, could be to have just one value column but store some JSON around the value.
This could look something like:
{
"type": "datetime",
"value": "2019-05-31 13:51:36"
}
That could even go a step further, using a JSON or XML column.
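Round-tripping such a wrapper is straightforward. A small Python sketch (the "type"/"value" keys follow the JSON example above; the datetime format string is an assumption):

```python
import json
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def wrap(value):
    # Store the value together with enough type information to restore it.
    if isinstance(value, datetime):
        return json.dumps({"type": "datetime", "value": value.strftime(FMT)})
    return json.dumps({"type": type(value).__name__, "value": value})

def unwrap(text):
    doc = json.loads(text)
    if doc["type"] == "datetime":
        return datetime.strptime(doc["value"], FMT)
    return doc["value"]  # json already restores int/float/str/bool

stamp = datetime(2019, 5, 31, 13, 51, 36)
assert wrap(stamp) == '{"type": "datetime", "value": "2019-05-31 13:51:36"}'
assert unwrap(wrap(stamp)) == stamp
```

If the column is a real JSON type (jsonb in Postgres), the type tag even stays queryable, e.g. filtering on `value ->> 'type' = 'datetime'`.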
I'm not in any way a PostgreSQL ninja, but I think that instead of two columns (one for the value and one for the type) you could look at the hstore data type:
data type for storing sets of key/value pairs within a single
PostgreSQL value. This can be useful in various scenarios, such as
rows with many attributes that are rarely examined, or semi-structured
data. Keys and values are simply text strings.
Of course, you have to check how dates/timestamps convert into and out of this type and see if it works for you.
You can use two different techniques:
If the type can vary for every tagid
Store, for every tagid-assetid combination, the data table name and the row ID within it, alongside the actual data tables:
maintable:
+---------------+----------------+-----------------+---------------+
|assetid (text) | tagid(integer) | tablename(text) | table_id(int) |
|---------------+----------------+-----------------+---------------|
|someStringValue| 1234 | tablebool | 123 |
|---------------+----------------+-----------------+---------------|
|aDiffStringKey | 1235 | tablefloat | 123 |
|---------------+----------------+-----------------+---------------|
|aDiffStrKey | 1236 | tablestring | 123 |
+---------------+----------------+-----------------+---------------+
tablebool
+-------------+-------------+
| id(integer) | value(bool) |
|-------------+-------------|
| 123 | False |
+-------------+-------------+
tablefloat
+-------------+--------------+
| id(integer) | value(float) |
|-------------+--------------|
| 123 | 12.345 |
+-------------+--------------+
tablestring
+-------------+---------------+
| id(integer) | value(string) |
|-------------+---------------|
| 123 | 'text' |
+-------------+---------------+
In case every tagid has a fixed type
Create a tagid description table:
tag descriptors
+---------------+----------------+-----------------+
|assetid (text) | tagid(integer) | tablename(text) |
|---------------+----------------+-----------------|
|someStringValue| 1234 | tablebool |
|---------------+----------------+-----------------|
|aDiffStringKey | 1235 | tablefloat |
|---------------+----------------+-----------------|
|aDiffStrKey | 1236 | tablestring |
+---------------+----------------+-----------------+
and the corresponding data tables:
tablebool
+-------------+----------------+-------------+
| id(integer) | tagid(integer) | value(bool) |
|-------------+----------------+-------------|
| 123 | 1234 | False |
+-------------+----------------+-------------+
tablefloat
+-------------+----------------+--------------+
| id(integer) | tagid(integer) | value(float) |
|-------------+----------------+--------------|
| 123 | 1235 | 12.345 |
+-------------+----------------+--------------+
tablestring
+-------------+----------------+---------------+
| id(integer) | tagid(integer) | value(string) |
|-------------+----------------+---------------|
| 123 | 1236 | 'text' |
+-------------+----------------+---------------+
All this is just to give you the general idea. You should adapt it to your needs.
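The lookup for the fixed-type variant is: read the descriptor to find the data table, then query that table. A sqlite3 sketch of the general idea (simplified to the descriptor table plus one data table per type, names as above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tagdescriptors (tagid INTEGER PRIMARY KEY, tablename TEXT);
    CREATE TABLE tablebool   (id INTEGER, tagid INTEGER, value BOOLEAN);
    CREATE TABLE tablefloat  (id INTEGER, tagid INTEGER, value REAL);
    CREATE TABLE tablestring (id INTEGER, tagid INTEGER, value TEXT);
""")
conn.executemany("INSERT INTO tagdescriptors VALUES (?, ?)",
                 [(1234, "tablebool"), (1235, "tablefloat"), (1236, "tablestring")])
conn.execute("INSERT INTO tablefloat VALUES (123, 1235, 12.345)")

def tag_value(conn, asset_id, tag_id):
    # Step 1: which data table holds this tag's values?
    (table,) = conn.execute(
        "SELECT tablename FROM tagdescriptors WHERE tagid = ?", (tag_id,)).fetchone()
    # Step 2: fetch from that table. The table name comes from our own
    # descriptor table, never from user input, so interpolating it is safe here.
    row = conn.execute(
        f"SELECT value FROM {table} WHERE id = ? AND tagid = ?",
        (asset_id, tag_id)).fetchone()
    return row[0] if row else None

print(tag_value(conn, 123, 1235))  # -> 12.345
```

The price of this layout is the two-step lookup (or a per-type UNION/JOIN), which is the usual trade-off against the single-table options above.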
Here is my table in the database :
id | account_name | account_number | account_type | address | email | ifsc_code | is_default_account | phone_num | User
-----+--------------+----------------+--------------+---------+------------------------------+-----------+--------------------+-------------+----------
201 | helloi32irn | 55265766432454 | Savings | | mypal.appa99721989#gmail.com | 5545 | f | 98654567876 | abc
195 | hello | 55265766435523 | Savings | | mypal.1989#gmail.com | 5545 | t | 98654567876 | axyz
203 | what | 01010101010101 | Current | | guillaume#sample.com | 6123 | f | 09099990 | abc
On form submission, the view posts only a single parameter, which in my case is name="activate"; it corresponds to the column "is_default_account" in the table.
I want to change the value of "is_default_account" from "t" to "f". For example, here in the table it is "t" for account_name "hello". I want to deactivate it, i.e. make it "f", and activate whichever account has been sent through the form.
This will update your table and make account 'what' the default (assuming that is_default_account is a BOOLEAN field):
UPDATE table
SET is_default_account = (account_name = 'what')
You may want to limit updates if the table holds more than just the few rows you listed, like this:
UPDATE table
SET is_default_account = (account_name = 'what')
WHERE is_default_account != (account_name = 'what')
AND <limit updates by some other criteria like user name>
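A quick way to verify the boolean-expression trick (sqlite3 here, which stores booleans as 0/1, but the `(account_name = 'what')` comparison behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE accounts (id INTEGER PRIMARY KEY,
                                       account_name TEXT,
                                       is_default_account BOOLEAN)""")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                 [(201, "helloi32irn", 0),
                  (195, "hello", 1),       # current default
                  (203, "what", 0)])

# One statement flips the old default off and the new default on:
# the comparison evaluates to true (1) only for the chosen account.
conn.execute("UPDATE accounts SET is_default_account = (account_name = 'what')")

rows = conn.execute(
    "SELECT account_name, is_default_account FROM accounts ORDER BY id").fetchall()
print(rows)  # -> [('hello', 0), ('helloi32irn', 0), ('what', 1)]
```

The nice property is that the old default is cleared and the new one set in a single atomic statement, so there is never a moment with two defaults.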
I think that to accomplish what you want, you should send at least two values from the form: one for the id of the account you want to update and the other for the action (activate here). You could also just send the id and have it toggle. There are many ways to do this, but I can't figure out exactly what you are trying to do, or whether you want SQL or Play Framework code. Without limiting your update somewhere (e.g. by id), you can't precisely control which specific rows get updated. Please clarify your question and add some more code if you want help on the Play Framework side, which I would think you do.