I am trying to make a database with a plan table.
The plans are: Basic, Express, Advanced, and Professional.
Each of them will have a price tag and support options such as PHP, ASP and SSL support.
         Basic   Express
price    69,00   89,00
setup     0,00    0,00
SSL      X       Yes
PHP      X       Yes
ASP      X       X
How am I able to make an ERD without causing redundancy?
I thought about making a table like:
Table Plans:

ID  Pname    Pprice  Psetup  Pasp  Pphp  Pssl
0   Basic    69,00   0,00    0     0     0
1   Express  89,00   0,00    0     1     1
etc.
The plans will be expanded with more types, but eventually that would make the table too wide, so I thought about creating another table.
Table Plans:

ID  Pname    Pprice  Psetup  Plansbool
0   Basic    69,00   0,00    0
1   Express  89,00   0,00    1

Table Plansbool:

ID  Bname  YesNo  PlansID
0   Php    0      0
1   Php    1      1
2   Asp    0      0
3   Asp    0      1
4   Ssl    0      0
5   Ssl    1      1
But this creates the problem that in the table "Plans", each plan can only reference one of the rows in the Plansbool table. I also think this is redundant, and right now I can't see the big picture of creating a non-redundant plan table with the corresponding types of support.
I'm sorry if this is really easily solved or too confusing.
In your proposed solution there is no redundancy.
Redundancy would be created if you were able to find the same data in two different ways.
I am confused by what "plans" are. Are the plans Basic, Express and so on, or are the plans PHP, ASP, ...?
I think you meant that PHP, ASP and so on are types (because in the first sentence you called both of them plans: Basic, Express, ... and PHP, ASP, ...).
So make a table with the plans (Basic, Express, ...) and another table with the types (PHP, ASP, ...).
Then make a many-to-many connection and resolve it with the attributes "Price" and "Setup".
And the columns:
Plan (PlanID, Name)
PlanType (PlanID, TypeID, Price, Setup)
Type (TypeID, Name)
And just add a "PlanType" row when a plan is offered for a type :-)
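A minimal DDL sketch of that three-table design; the column types and constraint names are my assumptions, not from the original post:

```sql
CREATE TABLE Plan (
    PlanID  INT PRIMARY KEY,
    Name    VARCHAR(50) NOT NULL        -- e.g. 'Basic', 'Express'
);

CREATE TABLE Type (
    TypeID  INT PRIMARY KEY,
    Name    VARCHAR(50) NOT NULL        -- e.g. 'PHP', 'ASP', 'SSL'
);

-- Junction table: one row per (plan, type) combination that is offered.
CREATE TABLE PlanType (
    PlanID  INT NOT NULL REFERENCES Plan (PlanID),
    TypeID  INT NOT NULL REFERENCES Type (TypeID),
    Price   DECIMAL(8,2) NOT NULL,
    Setup   DECIMAL(8,2) NOT NULL,
    PRIMARY KEY (PlanID, TypeID)
);
```

Because a (PlanID, TypeID) row exists only when the plan actually offers that type, no boolean columns are needed and no fact is stored twice.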
Related
I have a project where all the data is going into one big Postgres table like:

Time        ItemID  ItemType  Value
2021-09-16  1       A         2
2021-09-16  2       B         3
2021-09-17  3       A         3
My issue is that this table is becoming very large. Since there are only 2 ItemTypes, I'd like to have MyTableA and MyTableB, and then a 3rd table with a one-to-one mapping of ItemID to ItemType.
What is the most performant way to insert the data and redirect it to the respective table? I am currently thinking about creating a view with an INSTEAD OF trigger and then using two insert statements with joins to get the desired filtering. Is there a better way? Perhaps maintaining the ItemID_A/B in an array somewhere? Or should I figure out a way to do this client side?
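A sketch of the view-plus-INSTEAD OF-trigger idea in PostgreSQL, assuming hypothetical tables my_table_a and my_table_b with identical columns (all names here are placeholders):

```sql
-- Placeholder tables my_table_a / my_table_b hold the type-A / type-B rows.
CREATE VIEW my_table AS
    SELECT time, item_id, 'A' AS item_type, value FROM my_table_a
    UNION ALL
    SELECT time, item_id, 'B' AS item_type, value FROM my_table_b;

-- Route each inserted row to the matching table.
CREATE FUNCTION route_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.item_type = 'A' THEN
        INSERT INTO my_table_a (time, item_id, value)
        VALUES (NEW.time, NEW.item_id, NEW.value);
    ELSE
        INSERT INTO my_table_b (time, item_id, value)
        VALUES (NEW.time, NEW.item_id, NEW.value);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_insert
    INSTEAD OF INSERT ON my_table
    FOR EACH ROW EXECUTE FUNCTION route_insert();
```

With this in place, clients keep inserting into my_table and the routing happens server-side, row by row.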
I am writing dynamic SQL code, and it would be easier to use a generic where column in (<comma-separated values>) clause, even when the clause might have only 1 term (it will never have 0).
So, does this query:
select * from table where column in (value1)
have any different performance than
select * from table where column=value1
?
All my tests result in the same execution plans, but if there is some knowledge/documentation that sets it in stone, that would be helpful.
This might not hold true for each and every RDBMS, nor for each and every query with its specific circumstances.
The engine will translate WHERE id IN (1,2,3) to WHERE id=1 OR id=2 OR id=3.
So your two ways of articulating the predicate will (probably) lead to exactly the same interpretation.
As always: we should not really worry about the way the engine "thinks". That part was done pretty well by its developers :-) We tell it, through a statement, what we want to get, not how to get it.
Some more details here, especially the first part.
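One way to check this on your own platform is to compare the plans directly; the table and column names below are placeholders:

```sql
-- On most engines both statements produce the same execution plan.
EXPLAIN SELECT * FROM my_table WHERE my_column IN (1);
EXPLAIN SELECT * FROM my_table WHERE my_column = 1;
```

If the two plans match, the optimizer has already normalized the single-element IN list to an equality.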
I think this will depend on the platform you are using (the optimizer of the given SQL engine).
I did a little test using MySQL Server:
When I query select * from table where id = 1; I get 1 row, and the query took 0.0043 seconds.
When I query select * from table where id IN (1); I get 1 row, and the query took 0.0039 seconds.
I know this depends on the server, the PC and so on, but the results are very close.
Remember to check that the predicate stays sargable (search-argument-able) so the index can be used; with a list of constants, IN is expanded to simple equalities and remains sargable, just like =.
If you want to know which one is best for you, test them in your environment, because they both perform well.
Can anybody tell me whether the database structure below, where I create a single table to store multiple banners and templates for different event types, is a good design? We know we will keep adding, updating and removing banners and templates from the admin side. Will it satisfy relational database and normalization rules? Also, which approach gives shorter query execution time?
A) table1

Event_type_id  Key            Value
1              Small_banners  banner1;banner2;banner3
2              Template       temp1;temp2;temp3
...
Or is creating 2 tables like below the better approach?
B) banner table

Id  event_type_id  value
1   1              banner_1.jpg
2   1              banner_3.jpg
3   2              banner_4.jpg
...
And a second table for templates:

Template table

Id  event_type_id  value
1   1              temp_1.jpg
2   1              temp_2.jpg
3   2              temp_3.jpg
...
C) One more thing: is having around 50 rows in a single table better, or should we split them across multiple tables?
Please suggest, with reasons.
Going by the rules of normalisation, it is always better to keep a separate table instead of inserting comma-separated values into a single field.
That said, newer databases like Postgres support storing JSON data in fields and querying it.
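A DDL sketch of approach B; the table and column names mirror the question's examples, but the column types are my assumptions:

```sql
CREATE TABLE banner (
    id            SERIAL PRIMARY KEY,
    event_type_id INT NOT NULL,
    value         VARCHAR(255) NOT NULL   -- e.g. 'banner_1.jpg'
);

CREATE TABLE template (
    id            SERIAL PRIMARY KEY,
    event_type_id INT NOT NULL,
    value         VARCHAR(255) NOT NULL   -- e.g. 'temp_1.jpg'
);

-- One row per asset, so adding or removing a banner is a single
-- INSERT/DELETE instead of rewriting a packed 'a;b;c' string:
SELECT value FROM banner WHERE event_type_id = 1;
```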
We are looking for a more powerful way of collecting and processing data for our reports. For one advanced report on a big database, we need to run two independent SQL queries (on the same data source) and combine them afterwards.
Query 1 returns:
user id#1 ... 3 columns
user id#2 ... 3 columns
user id#4 ... 3 columns
Query 2 returns:
user id#1 ... 5 columns
user id#3 ... 5 columns
user id#4 ... 5 columns
What we want to show:
user id#1 ... 3 columns + 5 columns
user id#2 ... 3 columns
user id#3 ... 5 columns
user id#4 ... 3 columns + 5 columns
Although it's counter-intuitive, we found that combining the results of both queries in SQL leads to a considerably worse runtime of the SQL query.
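For reference, the combined result shown above is effectively a full outer join of the two result sets on the user id. A sketch, with q1/q2 standing in for the two prepared queries and every column name invented:

```sql
SELECT COALESCE(q1.user_id, q2.user_id) AS user_id,
       q1.a1, q1.a2, q1.a3,                -- the 3 columns from query 1
       q2.b1, q2.b2, q2.b3, q2.b4, q2.b5   -- the 5 columns from query 2
FROM (
    /* query 1 here */
) q1
FULL OUTER JOIN (
    /* query 2 here */
) q2 ON q2.user_id = q1.user_id;
```

Presumably something of this shape is the SQL-side combination that turned out slower than running the two queries separately.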
We have looked at subdatasets, but from my understanding it's not possible to mix the data from two subdatasets (or the main data + one subdataset) in a single table.
We have looked at subreports, but from my understanding a subreport will call the query once for each row of the report if I put the subreport in the Details band as we intend to. For performance reasons we want to run each of the two prepared queries only once.
We think the most reasonable approach is to write such advanced reports in Java, and that is possible; however, the JavaBean data source cannot access the report parameters. Our database is huge, so we can't just run queries without a WHERE clause and filter afterwards; the Java code needs access to the report parameters.
We are currently looking into implementing JRQueryExecutor as recommended there and there (last comment), or even taking advantage of scriptlets.
But it sounds quite advanced, and we are wondering: are we thinking the wrong way or heading in the wrong direction? And if JRQueryExecutor is the correct way, any example or documentation would be welcome.
We are also considering refactoring our SQL to achieve the result with only one query, but we do feel the reporting system ought to allow us to manipulate the data in Java as well.
In the end we made it work with a scriptlet. In afterReportInit, inheriting from JRDefaultScriptlet, you get the parameters and the data source from parametersMap, and you can then fill the data source from Java.
Two stored procedures were developed by .NET developers. They used to return the same record counts when passed the same parameter.
Now, due to some changes, we are getting a record-count mismatch: where the first stored procedure returns 2 records for a parameter, the second returns only 1.
To find the cause, I verified:
1. the total record count of each table after joining;
2. the total number of tables used in the joins;
3. whether DISTINCT / GROUP BY is used in both.
I am still not able to find the issue. How do I fix it? Could anybody share some ideas?
Thanks in advance.
Assuming the same JOINs and filters, then the problem is probably NULLs.
That is, either:
a WHERE clause has a direct NULL comparison, which will never match, or
a COUNT is taken on a nullable column. See Count(*) vs Count(1) for more.
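Both pitfalls are easy to reproduce; t and col below are placeholder names:

```sql
-- 1) A direct NULL comparison never matches any row:
SELECT * FROM t WHERE col = NULL;    -- returns 0 rows; write "col IS NULL" instead

-- 2) COUNT on a nullable column silently skips NULLs:
SELECT COUNT(*)   FROM t;            -- counts every row
SELECT COUNT(col) FROM t;            -- counts only rows where col IS NOT NULL
```

If one procedure filters or counts through a nullable column and the other does not, their counts will drift apart as soon as a NULL appears in the data.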
Either way, why do you have two very similar stored procedures, written by 2 different developers, that appear to have differences?