How to migrate relational tables to a DynamoDB table - PostgreSQL

I am new to DynamoDB. In my current project, I am trying to migrate most of our relational tables to DynamoDB, and I am facing a tricky scenario which I don't know how to solve.
In PostgreSQL, I have 2 tables:
Student
id | name | age | address | phone
---+--------+-----+---------+--------
1 | Alex | 18 | aaaaaa | 88888
2 | Tome | 19 | bbbbbb | 99999
3 | Mary | 18 | ccccc | 00000
4 | Peter | 20 | dddddd | 00000
Registration
id | class | student | year
---+--------+---------+---------
1 | A1 | 1 | 2018
2 | A1 | 3 | 2018
3 | A1 | 4 | 2017
4 | B1 | 2 | 2018
My query:
select s.id, s.name, s.age, s.address, s.phone
from Registration r inner join Student s on r.student = s.id
where r.class = 'A1' and r.year = '2018'
Result:
id | name | age | address | phone
---+--------+-----+---------+--------
1 | Alex | 18 | aaaaaa | 88888
3 | Mary | 18 | ccccc | 00000
So, how can I design the DynamoDB table to achieve this result, and extend it for CRUD?
Any advice is appreciated.

DynamoDB table design is going to depend largely on your access patterns. Without knowing the full requirements and queries needed by your app, it's not going to be possible to write a proper answer. But given your example here's a table design that might work:
(P. Key)  | (Sort)   |        |     |         |       | (GSI Sort)
          | (GSI PK) |        |     |         |       |
studentId | itemType | name   | age | address | phone | year
----------+----------+--------+-----+---------+-------+------
1         | Details  | Alex   | 18  | aaaaaa  | 88888 |
1         | Class_A1 |        |     |         |       | 2018
2         | Details  | Tome   | 19  | bbbbbb  | 99999 |
2         | Class_B1 |        |     |         |       | 2018
3         | Details  | Mary   | 18  | ccccc   | 00000 |
3         | Class_A1 |        |     |         |       | 2018
4         | Details  | Peter  | 20  | dddddd  | 00000 |
4         | Class_A1 |        |     |         |       | 2017
Note the global secondary index with the partition key on the item type and the sort key on the year.
With this design we have a few query options:
1) Get student for a given id: GetItem(partitionKey: studentId, sortKey: Details)
2) Get all classes for a given student id: Query(partitionKey: studentId, sortKey: STARTS_WITH("Class"))
3) Get all students in class A1 and year 2018: Query(GSI partitionKey: "Class_A1", sortKey: equals(2018))
For global secondary indexes, the partition and sort keys don't need to be unique, so you can have many Class_A1 / 2018 combos. If you haven't already read the Best Practices for DynamoDB, I highly recommend reading it in full.
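For illustration, here is roughly how those three access patterns could be expressed in PartiQL, DynamoDB's SQL-compatible query language. The table name Students and the GSI name itemType-year-index are assumptions made for this sketch, not part of the design above:
-- 1) Student details for a given id (base table: partition key studentId, sort key itemType)
SELECT * FROM "Students" WHERE studentId = 1 AND itemType = 'Details';
-- 2) All class items for a given student id
SELECT * FROM "Students" WHERE studentId = 1 AND begins_with(itemType, 'Class');
-- 3) All students registered in class A1 in 2018, via the GSI
SELECT * FROM "Students"."itemType-year-index" WHERE itemType = 'Class_A1' AND "year" = 2018;
(year is double-quoted in case it clashes with a reserved word; adjust the names to whatever you actually call your table, index, and attributes.)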

Related

Is there a V-lookup effect in Microsoft Access?

I am a novice self-teaching Microsoft Access.
I have an MS Access database with a table of students (Table1).
Table1
+----+-----------+----------+------------+------------+
| id | firstname | lastname | Year_Group | Form_Group |
+----+-----------+----------+------------+------------+
| 2 | mnb | nbgfv | 7 | 1 |
| 3 | jhg | uhgf | 8 | 2 |
| 4 | poi | ijuy | 9 | 2 |
| 5 | tgf | tgfd | 10 | 2 |
| 6 | wer | qwes | 11 | 2 |
+----+-----------+----------+------------+------------+
Every day, students' days are recorded, sort of like in Table2.
Table2
+----------+----+-----------+----------+------------+--------+-----------+----------+
| Date | id | firstname | lastname | Year_Group | Effort | Behaviour | Homework |
+----------+----+-----------+----------+------------+--------+-----------+----------+
| 28/02/19 | 2 | mnb | nbgfv | 7 | Good | Good | Y |
| 28/02/19 | 3 | jhg | uhgf | 8 | OK | OK | Y |
| 28/02/19 | 4 | poi | ijuy | 9 | Bad | Bad | N |
| 01/03/19 | 5 | tgf | tgfd | 10 | Good | OK | Y |
| 01/03/19 | 6 | wer | qwes | 11 | Good | Good | Y |
+----------+----+-----------+----------+------------+--------+-----------+----------+
Is there a way (when using a list box or combo box) to select a student from Table1 so that their information is used for the corresponding columns in Table2?
Or is there a more efficient way to do this?
Firstly, you should normalise your data.
Currently, you are repeating the firstname, lastname, and Year_Group data in two separate tables, which not only bloats your database, but also means that such data must be maintained in two separate places, potentially leading to inconsistencies and then uncertainty as to which is the master.
Instead, I would suggest that your Students table should contain all information pertaining to the characteristics of a student:
Students
+----+-----------+----------+------------+------------+
| id | firstname | lastname | Year_Group | Form_Group |
+----+-----------+----------+------------+------------+
| 2 | mnb | nbgfv | 7 | 1 |
| 3 | jhg | uhgf | 8 | 2 |
| 4 | poi | ijuy | 9 | 2 |
| 5 | tgf | tgfd | 10 | 2 |
| 6 | wer | qwes | 11 | 2 |
+----+-----------+----------+------------+------------+
And the information pertaining to each school day should only reference the student ID in the Students table:
SchoolDays
+----------+----+--------+-----------+----------+
| Date | id | Effort | Behaviour | Homework |
+----------+----+--------+-----------+----------+
| 28/02/19 | 2 | Good | Good | Y |
| 28/02/19 | 3 | OK | OK | Y |
| 28/02/19 | 4 | Bad | Bad | N |
| 01/03/19 | 5 | Good | OK | Y |
| 01/03/19 | 6 | Good | Good | Y |
+----------+----+--------+-----------+----------+
Then, if you want to display the data in its entirety, you would use a query which joins the two tables, e.g.:
select
t2.date,
t1.firstname,
t1.lastname,
t1.year_group,
t2.effort,
t2.behaviour,
t2.homework
from
students t1 inner join schooldays t2 on t1.id = t2.id

How to flatten rows to columns in PostgreSQL

Using PostgreSQL 9.3, I have a table that shows individual permits issued across a single year, shown below:
permit_typ      | zipcode | address          | name
----------------+---------+------------------+--------------
CONSTRUCTION    | 20004   | 124 fake streeet | billy joe
SUPPLEMENTAL    | 20005   | 124 fake streeet | james oswald
POST CARD       | 20005   | 124 fake streeet | who cares
HOME OCCUPATION | 20007   | 124 fake streeet | who cares
SHOP DRAWING    | 20009   | 124 fake streeet | who cares
I am trying to flatten this so it looks like
CONSTRUCTION | SUPPLEMENTAL | POST CARD| HOME OCCUPATION | SHOP DRAWING | zipcode
-------------+--------------+-----------+----------------+--------------+--------
1 | 2 | 3 | 5 | 6 | 20004
1 | 2 | 3 | 5 | 6 | 20005
1 | 2 | 3 | 5 | 6 | 20006
1 | 2 | 3 | 5 | 6 | 20007
1 | 2 | 3 | 5 | 6 | 20008
I have been trying to use crosstab, but it's a bit above my rusty SQL experience. Does anybody have any ideas?
I usually approach this type of query using conditional aggregation. In Postgres, you can do:
select zipcode,
sum( (permit_typ = 'CONSTRUCTION')::int) as Construction,
sum( (permit_typ = 'SUPPLEMENTAL')::int) as SUPPLEMENTAL,
. . .
from t
group by zipcode;
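Spelled out for the five permit types in the question (t is the placeholder table name from the snippet above, and aliases containing spaces need double quotes), the full query would look something like:
select zipcode,
sum( (permit_typ = 'CONSTRUCTION')::int) as construction,
sum( (permit_typ = 'SUPPLEMENTAL')::int) as supplemental,
sum( (permit_typ = 'POST CARD')::int) as "post card",
sum( (permit_typ = 'HOME OCCUPATION')::int) as "home occupation",
sum( (permit_typ = 'SHOP DRAWING')::int) as "shop drawing"
from t
group by zipcode;
Each sum() counts how many rows of that permit type fall in the zipcode, so you get one row per zipcode with one column per permit type.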

What is the proper approach to insert into multiple tables at once?

For example I have a table called product_list, which holds a list of products:
+----+-------+-----------+-------------+--+
| id | name | weight(g) | type | |
+----+-------+-----------+-------------+--+
| 1 | Shirt | 157 | Clothes | |
+----+-------+-----------+-------------+--+
| 2 | Ring | 53 | Accessories | |
+----+-------+-----------+-------------+--+
| 3 | Pants | 202 | Clothes | |
+----+-------+-----------+-------------+--+
and a table called product_price:
+----------+----+-------+--------+--+
| price_id | id | name | price | |
+----------+----+-------+--------+--+
| 1 | 1 | Shirt | 99.00 | |
+----------+----+-------+--------+--+
| 2 | 2 | Ring | 149.00 | |
+----------+----+-------+--------+--+
| 3 | 3 | Pants | 119.00 | |
+----------+----+-------+--------+--+
If I insert 1 row of data into product_list, part of the data (such as the product id & product name) should also be inserted into another table like product_price, which holds the prices for all products (new products would have 0 or NULL values for their price). E.g.:
product_list:
+----+--------+-----------+-------------+--+
| id | name | weight(g) | type | |
+----+--------+-----------+-------------+--+
| 1 | Shirt | 157 | Clothes | |
+----+--------+-----------+-------------+--+
| 2 | Ring | 53 | Accessories | |
+----+--------+-----------+-------------+--+
| 3 | Pants | 202 | Clothes | |
+----+--------+-----------+-------------+--+
| 4 | Shirt2 | 175 | Clothes | |
+----+--------+-----------+-------------+--+
product_price:
+----------+----+-------+--------+--+
| price_id | id | name | price | |
+----------+----+-------+--------+--+
| 1 | 1 | Shirt | 99.00 | |
+----------+----+-------+--------+--+
| 2 | 2 | Ring | 149.00 | |
+----------+----+-------+--------+--+
| 3 | 3 | Pants | 119.00 | |
+----------+----+-------+--------+--+
| 4 | 4 | Shirt2| 0.00 | |
+----------+----+-------+--------+--+
My question here is about how to approach this. What is the proper way (in a professional manner) that an experienced person would approach this matter?
These are the 2 approaches I have in mind:
1 - Using triggers to insert into the other tables, like product_price, etc., whenever I insert product data into product_list
2 - Using a function (stored procedure) like product_add to add a new product into each table.
Which method is better? Or if there is a better suggestion, I'd like to know about it. Thanks in advance.
TL;DR: Should I use triggers or stored procedures; which is better? Or do you have a better suggestion?
In Postgres, you can use CTEs:
with pl as (
    insert into product_list (name, weight, type)
    select . . .
    returning *
)
insert into product_price (id, price)
select id, NULL
from pl;
Note: You shouldn't repeat the name column in both the product_list and product_price tables. It should only be in the list table.
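As a concrete sketch, assuming product_list.id is auto-generated (e.g. a serial/identity column) and using made-up values for the new product, inserting the product and its placeholder price row in one statement could look like:
with pl as (
    insert into product_list (name, weight, type)
    values ('Shirt2', 175, 'Clothes')
    returning id
)
insert into product_price (id, price)
select id, 0.00
from pl;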

Merge multiple tables with a common column name

I am trying to merge multiple tables that have a common column name which need not have the same values across the tables. For ex,
-tmp1-
id dat
1 234
2 432
3 412
-tmp2-
id nom
1 jim
2
3 ryan
4 jack
-tmp3-
id pin
1 gi23
2 x4ed
3 yit42
8 hiu11
If above are the input, the output needs to be,
id  dat  nom   pin
1   234  jim   gi23
2   432        x4ed
3   412  ryan  yit42
4        jack
8              hiu11
Thanks in advance.
PostgreSQL 8.2.15 on Greenplum, queried from R (pass-through queries).
Use the FULL JOIN ... USING (id) syntax.
Please see this example: http://sqlfiddle.com/#!12/3aff2/1
This is how the different join types work (provided that tab1.row3 meets the join condition with tab2.row1, and tab1.row4 with tab2.row2):
| tab1 |   | tab2 |   | JOIN                  |   | LEFT JOIN             |   | RIGHT JOIN            |   | FULL JOIN             |
--------   --------   -------------------------   -------------------------   -------------------------   -------------------------
| row1 |                                          | tab1.row1 |           |                               | tab1.row1 |           |
| row2 |                                          | tab1.row2 |           |                               | tab1.row2 |           |
| row3 |   | row1 |   | tab1.row3 | tab2.row1 |   | tab1.row3 | tab2.row1 |   | tab1.row3 | tab2.row1 |   | tab1.row3 | tab2.row1 |
| row4 |   | row2 |   | tab1.row4 | tab2.row2 |   | tab1.row4 | tab2.row2 |   | tab1.row4 | tab2.row2 |   | tab1.row4 | tab2.row2 |
           | row3 |                                                           |           | tab2.row3 |   |           | tab2.row3 |
           | row4 |                                                           |           | tab2.row4 |   |           | tab2.row4 |
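Applied to the three tables in the question, chaining FULL JOIN ... USING (id) would give something like:
select id, dat, nom, pin
from tmp1
full join tmp2 using (id)
full join tmp3 using (id)
order by id;
With USING (id), the id columns are merged, so rows that exist in only one of the tables still come back, with the columns from the other tables left NULL, which matches the expected output.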

Typo3 TCA custom table

I have this situation: I have one offer, and that offer has n dates and n options. So I have two additional tables for the offer. And there is a third one, which is a price, but the price depends on the date and the offer. It looks like this:
| | date 1 | date 2 | date 3 |
| offer 1 | price 11 | price 12 | price 13 |
| offer 2 | price 21 | price 22 | price 23 |
| offer 3 | price 31 | price 32 | price 33 |
Is there any way to create a custom TCA field to insert all of these price values at once?
So basically I need one table with input fields, which also stores the uid of the date and the offer as references.
Make more than one table... Tables with a dynamic column count are horribly bad to maintain.
Table Offer:
uid | Name | Desc
1 | offer1 | This is some cool shit
2 | offer2 | dsadsad
3 | offer3 | sdadsdsadsada
Table Date:
uid | date
1 | 12.02.2014
2 | 12.03.2014
3 | 20.03.2014
Table Prices:
uid | date | offer | price
1 | 1 | 1 | price11
2 | 1 | 2 | price21
3 | 1 | 3 | price31
4 | 2 | 1 | price12
5 | 2 | 2 | price22
6 | 2 | 3 | price32
7 | 3 | 1 | price13
8 | 3 | 2 | price23
9 | 3 | 3 | price33
And then it's straightforward...
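For example, reading the price matrix back out is then a plain join (the table and column names follow the sketch above and are not the actual TYPO3 table names):
select o.Name as offer, d.date, p.price
from Prices p
join Offer o on o.uid = p.offer
join Date d on d.uid = p.date
order by o.uid, d.uid;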