I'd like to read metadata about our calculation views (in order to do some best-practice checks).
The checks I want to make include:
View is a graphical calculation view
View-properties:
Default Client: Cross Client
Execute in SQL Engine
Filter on columns (show if some column names appear)
Read comments (we have a rule which states that each calculation view must have a comment in the semantics, explaining the purpose of the view)
That's possible by querying the _SYS_REPO tables.
I gave a full example answer here https://answers.sap.com/questions/58460/meta-data-sorage-location-of-modelling-views.html?childToView=59915#answer-59915
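The gist of it, as a rough sketch only (here wrapped in a Node.js script using the @sap/hana-client driver; it reads the view XML from _SYS_REPO.ACTIVE_OBJECT, but the string checks on the XML below are illustrative assumptions, not taken from the linked answer, and the exact attribute names can differ between HANA revisions):
// Sketch only: reads calculation view definitions from the XS classic repository.
// Assumes _SYS_REPO.ACTIVE_OBJECT holds the view XML in CDATA; the attribute
// strings checked below are illustrative and may differ between HANA revisions.
const hana = require('@sap/hana-client');

const conn = hana.createConnection();
conn.connect({ serverNode: 'myhost:30015', uid: 'CHECK_USER', pwd: '***' });

const rows = conn.exec(
  "SELECT PACKAGE_ID, OBJECT_NAME, CDATA " +
  "FROM \"_SYS_REPO\".\"ACTIVE_OBJECT\" " +
  "WHERE OBJECT_SUFFIX = 'calculationview'"
);

for (const row of rows) {
  const xml = row.CDATA || '';
  console.log(row.PACKAGE_ID + '/' + row.OBJECT_NAME, {
    crossClient: xml.includes('defaultClient="crossclient"'), // assumed attribute
    sqlEngine: /SQL ENGINE|executionHints/i.test(xml),        // assumed attribute
    hasComment: /<descriptions|<comment/i.test(xml)           // assumed element
  });
}

conn.disconnect();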
This feels like it should be a common requirement, but I'm not sure how best to implement the requirement.
I'm going to make up a really simple example. It's similar enough to what I actually need, without getting over-complicated.
Imagine we have a table called transport and it has the following columns:
type
model_name
size
number_of_wheels
fuel
maximum_passenger_count
We want to store all sorts of different types of transportation in this table, but not every type will have values in every column. Imagine the commonality is a lot higher than here, as this is a bit of a fake example.
Here's a few examples of how this might work in practice:
type = cycle, we ban fuel, as it's not relevant for a cycle
type = bus, all columns are valid
type = sledge, we ban number_of_wheels, as sledges don't have wheels; we also ban fuel
type = car, all columns are valid
I want my UI to show a grid with a list of all the rows in the transport table. Users can edit the data directly in the grid and add new rows. When they add a new row, they're forced to pick the transport type in a dropdown before it appears in the grid. They then complete the details. All the values are optional, apart from the ones we explicitly don't want to record a value for, where we expect to not see anything at all.
I can see a number of ways to implement this, but none of them seems like a complete solution:
I could put this logic into the UI, enabling/disabling grid cells based on type. But there's nothing to stop someone directly inserting data into the "wrong" columns in the backend of the database or via the API, which would then come through into the UI unless I added a rule to also mask out values in disabled cells. Making changes to which columns are restricted per transport type would also be very difficult
I could put this logic into the API, raising an error if someone enters data into a cell that should be disallowed. This closes one gap for insertion into the database via the API, but SQL scripts would still allow entry into the "wrong" columns. Also, the user experience would suck, as users would have to guess which columns to complete and which to leave blank. It would still be difficult to make changes to which columns are allowed/restricted
I could add a trigger to the database, maybe to set the values to NULL if they shouldn't be allowed, but this seems clunky and users would not understand what was happening
I could add a generated column, but this doesn't help if I sometimes need to set a value and sometimes don't
I could just allow the unnecessary data to be stored in the database, then hide it by using a view to report it back. It doesn't seem great, as users would still see data disappearing from the UI with no explanation
I could add a second table, storing a matrix of which values are allowed and which are restricted by type. The API, UI and database could all implement this list using different mechanisms. This comes with the advantage of having a single place to make changes that will immediately be reflected across the entire system, but it's a lot of work, and I have lots of these tables that work the same way (sketched just below)
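To make that last option concrete, here's a rough sketch of the kind of restriction matrix I have in mind and how the API layer might consume it (all names are invented; the same matrix could also drive cell enabling in the UI and a check or trigger in the database):
// Hypothetical sketch: one matrix of banned columns per transport type,
// loaded from a config table and reused by the API validation layer.
const bannedColumns = {
  cycle:  ['fuel'],
  sledge: ['number_of_wheels', 'fuel'],
  bus:    [],
  car:    []
};

// Reject writes that populate a banned column for the given type.
function validateTransport(row) {
  const banned = bannedColumns[row.type] || [];
  const violations = banned.filter(
    col => row[col] !== undefined && row[col] !== null
  );
  if (violations.length > 0) {
    throw new Error(
      "Columns not allowed for type '" + row.type + "': " + violations.join(', ')
    );
  }
  return row;
}

// Example: this would throw, because a cycle must not record fuel.
// validateTransport({ type: 'cycle', model_name: 'BMX', fuel: 'diesel' });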
I have a situation as described in the ExtbaseFluid book:
I would like to store information in the intermediate table, which is not recommended at all.
Here is a quote from the warning box of the above-linked book chapter:
Do not store data in the Intermediate Table that concern the Domain. Though TYPO3 supports this (especially in combination with Inline Relational Record Editing (IRRE)), this is always a sign that further improvements can be made to your Domain Model. Intermediate Tables are and should always be tools for storing relationships and nothing else.
Let’s say you want to store a CD with its containing music tracks: CD -- m:n (Intermediate Table) -- Song. The track number may be stored in a field of the Intermediate Table. However, the track should be stored as a separate domain object, and the connection be realized as CD -- 1:n -- Track -- n:1 -- Song.
So I don't want to do what is not recommended. But thinking about the workflow for the editor that results from the recommended solution raises a few questions for me.
To stay with this example I would need the following tables:
tx_extname_domain_model_cd
tx_extname_domain_model_cd_track_mm
tx_extname_domain_model_track (which holds the track number)
tx_extname_domain_model_track_song_mm
tx_extname_domain_model_song
From what I know, this would end in a situation where the editor would need to create the following records:
one record for the cd
one record for the song
now the editor can create one record for the track, where the track number is added; furthermore, the CD record needs to be assigned, as well as the song.
So here are my questions:
I guess this workflow cannot be improved with some (to me unknown) TCA setup?
An editor cannot directly reach the song when the cd record is opened?
Instead, she/he first has to open the track record and can navigate from there to the song?
Is it really that bad to store data in the intermediate table? The TYPO3 table sys_file_reference does the same!? But I wonder how that data could be shown, because IRRE is not possible, since it shall only be used for 1:n relations (source).
The question you have to ask yourself is: Do I want to do coding by the book, or do I want to create a pragmatic approach to solve a customer's problem?
In this specific case the additional problem is that the people who originally invented Extbase had quite a sophisticated and academic approach, but when it comes to pragmatic use and performance, they were blocked by their own rules and stuck with coding by the book.
This example and the warning message in particular show a way of thinking that was one of the reasons why I never actually used Extbase but went for Core API methods to create performant and pragmatic queries that get the desired result sets. Now that we've got Doctrine under the hood, this works like a charm even with quite exotic DB flavors.
Of course intermediate tables are a good idea, and of course those intermediate tables can and should be enriched with additional data fields that do not require a 3rd, 4th or nth table to store, e.g. a simple set of dropdown options, since this can easily be handled with attributes configured in TCA, as shown here: https://docs.typo3.org/m/typo3/reference-tca/master/en-us/ColumnsConfig/Type/Inline/Examples.html
sys_file_reference is the most prominent example, since it provides exactly the kind of additional information that should not be pumped into additional tables - and guess what, the TYPO3 core does not make use of a single line of Extbase code to deal with that data, or with almost any other data of the core tables.
To answer your last question: Take a look at the good old IRRE Tutorial to get a clue how to do m:n connections with intermediate inline tables.
https://docs.typo3.org/typo3cms/extensions/irre_tutorial/0.4.0/Manual/Index.html#intermediate-tables-for-m-n-relations
It depends on the issue; sometimes the intermediate table is an entity, sometimes not. In this example the intermediate table is the track, which would contain: [uid, cd, song, track_no, ... (whatever else is needed to define the track)]
Be careful when you define your data that you do not make it too advanced.
Using jqGrid free (version 4.15.6) to show very basic information about invoices (i.e. date created, date due, client, total, status). The invoices grid displays only a few pertinent columns because there is just no need to show more than that. In reality there are a lot of other invoice-related fields that are not shown. I would like to offer end-users the ability to filter the grid based on a lot of these other parameters that are simply not part of the grid contents.
I know jqGrid offers built-in searching, and you can easily just add hidden columns with all the data, but I feel this is not good for us--invoices contain a lot of data--data that is not necessarily present in just the invoices database table. We want the grid to provide many other filtering options outside of the base invoice data, but we do NOT want to use the built-in filter options. Instead, I would like to use a separate HTML table with a bunch of search fields that our server-side code would know how to pull back. When one decides to invoke the external filter, we want the grid to load all invoices matching that combined filter. And if one chooses to navigate using the grid's paging buttons, we want the grid to continue using the original external filtering parameters.
Hope this makes sense. Maybe I am just overthinking this, but I am fairly certain the grid is designed to use its built-in filtering/searching tools/dialog, and I have not found any way to override this behavior. Actually, I did do this with an older jqGrid, but that involved using jQuery to completely REPLACE the default pager with custom HTML and event handling. I never could figure out how to do it within the older jqGrid itself, so I chose to write it myself. But that code is less than optimal and even I know it is subject to much criticism. Having upgraded to 4.15.6, I want to do this the best way and I want to keep it logical and practical.
I have tried using beforeRequest() and onPaging() events to change the 'url' parameter, thinking that if I modified the url, I could change the GET to include all of our custom filtering fields. It seems that does not work as the url NEVER changes from the originally defined value. Console logging does show the events firing but no change to url. On top of that, the grid ALWAYS passes its own page field, _search field, etc. to the server so the server NEVER sees the filter request.
How does one define their own custom filtering coupled with paging loader and still take advantage of the built-in paging events? What am I missing?
It's difficult to answer your question because you didn't post code fragments showing how you use jqGrid, and because the total amount of data that might need to be displayed across all pages isn't known.
In general, there are two main alternatives for implementing custom filtering:
server side filtering
client side filtering
One can additionally use a mix of both kinds of filtering. For example, one can load from the server all invoices based on some fixed filters (all invoices of a specific user, all invoices of one organization, all invoices of the last month) and then use the loadonce: true, forceClientSorting: true options to sort and filter the returned data on the client side. The user could additionally filter the subset of data locally using the filter toolbar or the searching dialog.
Client-side performance has improved substantially in recent years, and loading relatively large JSON data from the server can be done very quickly. Because of that, client-side filtering is strongly recommended. To better understand the performance of local sorting, filtering and paging, I'd recommend you try the functionality on the demo. You will see that the timing of local filtering of a grid with 5000 rows and 13 columns is better than what you could mostly expect from the round trip to the server and the processing of server-side filtering on some very well organized database. That's the reason why I recommend considering client-side sorting and filtering (or the loadonce: true, forceClientSorting: true options) as far as possible.
If you need to filter data on the server, then you just need to send additional parameters to the server on every request. One can do that by including additional parameters in postData. See the old answer for additional details. Alternatively, one can use serializeGridData to extend/modify the data which will be sent to the server.
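For example, one can define the additional parameters as functions inside postData, so that they are re-evaluated on every request, including paging requests; the URL and field names below are only placeholders:
// Sketch: function-valued postData properties are re-evaluated on every
// request, so paging keeps sending the current external filter values.
$("#grd_invoices").jqGrid({
    url: "invoices.php",            // placeholder URL
    datatype: "json",
    colModel: [
        { name: "invdate" },
        { name: "client" },
        { name: "total", sorttype: "number" },
        { name: "status" }
    ],
    pager: true,
    postData: {
        fstatus: function () { return $("#fstatus").val(); },
        fsdate:  function () { return $("#fsdate").val(); },
        fedate:  function () { return $("#fedate").val(); }
    }
});

// Re-read the external filter and reload from page 1 when the user applies it:
$("#btnApplyFilter").on("click", function () {
    $("#grd_invoices").trigger("reloadGrid", [{ page: 1 }]);
});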
After the data are loaded from the server, they can be sorted and filtered locally before the first page of data is displayed in the grid. To force local filtering one just needs to add forceClientSorting: true in addition to the well-known loadonce: true parameter. It forces applying local logic to the data returned from the server. Thus one can use postData.filters together with search: true to force additional local filtering, and the sortname and sortorder parameters to force local sorting.
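A minimal sketch of that combination could look like the following (the URL, columns and the filter rule are only an illustration):
// Sketch: load the data once from the server, then apply local sorting and
// filtering to it before the first page is displayed.
$("#grid").jqGrid({
    url: "invoices.php",            // placeholder URL
    datatype: "json",
    colModel: [
        { name: "client" },
        { name: "status" },
        { name: "total", sorttype: "number" }
    ],
    loadonce: true,                 // hold the returned data locally
    forceClientSorting: true,       // apply local sorting/filtering to it
    search: true,
    postData: {
        filters: JSON.stringify({
            groupOp: "AND",
            rules: [{ field: "status", op: "eq", data: "open" }]
        })
    },
    sortname: "total",
    sortorder: "desc",
    pager: true
});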
One more important remark about using hidden columns. Every hidden column will force creating DOM elements, which represent unneeded <td> elements. The more DOM elements you place on the page, the slower the page will be. If local data is used (or if loadonce: true is used), then jqGrid holds the data associated with every row twice: once as a JavaScript object and once as cells in the grid (<td> elements). Free jqGrid allows one to use "additional properties" instead of hidden columns. In that case no data will be placed in the DOM of the grid, but the data will be held in JavaScript objects, and one is able to sort or filter by additional properties in the same way as with other columns. In the simplest case one can remove all hidden columns and add the additionalProperties parameter, which should be an array of strings with the names of the additional properties. Instead of strings, the elements of additionalProperties can be objects with the same structure as colModel items. For example, additionalProperties: [{ name: "taskId", sorttype: "integer"}, "isFinal"]. See the demo as an example. The input data of the grid can be seen here. Another demo shows that the searching dialog contains the additional properties in addition to the jqGrid columns. The commented-out columns part of the searching options shows a more advanced way to specify the list and the order of columns and additional properties displayed in the searching dialog.
Forgive my answering like this, but this question started out on one subject related to filtering and paging using an external filtering source. Oleg actually has several demos over many threads that I was able to use to accomplish the custom filtering and maintain the default built-in paging. So his answer will be the accepted answer for the original question topic.
But while solving the original issue, I encountered another issue with loading the grid initially. I wanted to have the grid load with default filtering values should no other filter already be in place. That really should have been a different question because it did not affect the first.
I found yet another Oleg reply on a completely different question:
jqGrid - how to set grid to NOT load any data initially?.
Oleg answered that question and that answer solved our second need to load one way, then allow another way.
So, on initial load, we look for the filter params server-side. None given? We pull records using default filtering. Params present? We use the initially provided params. The difference with initial loading is that we do not make an AJAX call; we instead json_encode the data and place it in the grid definition as follows:
$('#grd_invoices').jqGrid({
...
url: '{$modulelink}&sm=130',
data: {$json_encoded_griddata},
datatype: 'local',
...
});
Since the datatype is set to 'local', the grid does NOT go to the server initially, so the data parameter is used by the grid. Once we are ready to filter, we use Oleg's solution from yet another answer on yet another question to dynamically apply the filter as follows:
var myfilter = { groupOp: 'AND', rules: []};
myfilter.rules.push({field:'fuserid',op:'eq',data:$('#fuserid').val()});
myfilter.rules.push({field:'finvoicenum',op:'eq',data:$('#finvoicenum').val()});
myfilter.rules.push({field:'fdatefield',op:'eq',data:$('#fdatefield').val()});
myfilter.rules.push({field:'fsdate',op:'eq',data:$('#fsdate').val()});
myfilter.rules.push({field:'fedate',op:'eq',data:$('#fedate').val()});
myfilter.rules.push({field:'fwithin',op:'eq',data:$('#fwithin').val()});
myfilter.rules.push({field:'fnotes',op:'eq',data:$('#fnotes').val()});
myfilter.rules.push({field:'fdescription',op:'eq',data:$('#fdescription').val()});
myfilter.rules.push({field:'fpaymentmethod',op:'eq',data:$('#fpaymentmethod').val()});
myfilter.rules.push({field:'fstatus',op:'eq',data:$('#fstatus').val()});
myfilter.rules.push({field:'ftotalfrom',op:'eq',data:$('#ftotalfrom').val()});
myfilter.rules.push({field:'ftotal',op:'eq',data:$('#ftotal').val()});
myfilter.rules.push({field:'fmake',op:'eq',data:$('#fmake').val()});
myfilter.rules.push({field:'fmodel',op:'eq',data:$('#fmodel').val()});
myfilter.rules.push({field:'fserial',op:'eq',data:$('#fserial').val()});
myfilter.rules.push({field:'fitemid',op:'eq',data:$('#fitemid').val()});
myfilter.rules.push({field:'ftaxid',op:'eq',data:$('#ftaxid').val()});
myfilter.rules.push({field:'fsalesrepid',op:'eq',data:$('#fsalesrepid').val()});
var grid = $('#grd_invoices');
grid[0].p.search = myfilter.rules.length>0;
$.extend(grid[0].p.postData,{filters:JSON.stringify(myfilter)});
$('#grd_invoices').jqGrid('setGridParam',{datatype:'json'}).trigger('reloadGrid',[{page:1}]);
This allows us to have the grid show initial data loaded locally; subsequent filtering then changes the grid datatype to 'json', which forces the grid to go to the server with the new filter params, where it loads the more specifically filtered data.
Credit goes to Oleg because I used many of his posts from many questions to reach the end result. Thank you #Oleg!
I've got a question for you Couchbase pros: is it possible to synchronize a subset of documents (e.g. the documents within a view) with another bucket?
So that the other bucket documents are always a direct subset of the "master" bucket?
If so, isn't that too expensive in terms of performance? Or does Couchbase have any functionality to only create deep links to the documents instead of copying them?
Alternatively: is it possible to write views on views?
Thank you in advance!
--- EDIT ----
Let's say I want to have two sets (buckets) of documents S1 and S2. S2 is a subset of S1. Each set contains the same views V1, V2 and V3 since I want to be able to query any of them with the same logic/interface. In my case set S2 is build per user/company/store/whatever, in production there should be like 1000ish subsets S2 - to stay abstract let's call them S2a S2b and S2c.
The selection of documents which to be contained in any subset is done by a filtering instance (for example a view). Let's call these filtering instances F1 for filtering S1 to S2 hence F1a, F1b and F1c.
So with my actual knowledge of couchbase this results in the following design/view architecture: I've got the three "base" views to display V1,V2 and V3, and to realize S2a, S2b and S2c I must create the design views S2aV1, S2aV2, S2aV3, S2bV1, S2bV2, etc. (9 Views).
One could say "Well choose your keys wisely and you can avoid the sub views" but in my opinion this isn't that easy because of the following circumstances: In worst case the filter parameters change every minute and contain many WHERE IN constraints which could (at my actual point of view) not be handled efficiently querying k/v lists.
This leads to the following thoughts and the question I initially asked. If I use the same views in any subset (defined by a filter) shouldn't it be possible to build up an entity which helps me handling complex filtering? For example a function which is called during runtime while generating the view output? This could look like /design/view?filter=F1 or something like that.
Or do you have any other ideas to solve this problem? Or should I use SQL since it's more capable of handling frequently changing filters?
Generally speaking, for most models you don't really need to have bucket "subsets". Is there a particular reason you are trying to do this and why you would want that data broken out? You can also query your views, or, instead of a view on a view, you can just make a separate view that maps/filters further based on your needs (i.e. does the same job as a view on a view).
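For example, the extra filtering can simply be folded into the map function of a second view instead of chaining views (doc.type and doc.storeId below are made-up field names):
// Sketch: a second view that applies the "F1" filtering directly in its map
// function, instead of a view on top of another view.
function (doc, meta) {
  if (doc.type === "invoice" && doc.storeId === "S2a") {
    emit([doc.storeId, doc.date], null);
  }
}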
We are also working on Elasticsearch integration, which may be better for your use case.
I think what you want to do is write a view on your original bucket, and then copy the keys/values from that view to be documents in a new bucket.
It shouldn't be hard to write an automated framework for managing this so that you can keep the derived data up to date in near real time.
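A rough sketch of such a framework with the Couchbase Node.js SDK 2.x could look like this (bucket, design document and view names are placeholders); you'd run it periodically to keep the derived bucket close to real time:
// Sketch: query a view on the master bucket and upsert the matching
// documents into a derived bucket. All names are placeholders.
var couchbase = require("couchbase");

var cluster = new couchbase.Cluster("couchbase://127.0.0.1");
var master  = cluster.openBucket("master");
var derived = cluster.openBucket("s2a");

var query = couchbase.ViewQuery.from("filters", "f1_s2a"); // design doc / view

master.query(query, function (err, rows) {
  if (err) { throw err; }
  var keys = rows.map(function (row) { return row.id; });
  master.getMulti(keys, function (errCount, results) {
    Object.keys(results).forEach(function (id) {
      if (!results[id].error) {
        derived.upsert(id, results[id].value, function () {});
      }
    });
  });
});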
What are some possible designs to deal with frequently changing data forms?
I have a basic CRUD web application where the main data entry form changes yearly. So each record should be tied to a specific version of the form. This requirement is kind of new, so the existing application was not built with this in mind.
I'm looking for different ways of handling this, hoping to avoid future technical debt. Here are some options I've come up with:
Create a new object, UI and set of tables for each version. This is obviously the most naive approach.
Keep adding all the fields to the same object and DB tables, but show/hide them based on the form version. This will become a mess after a few changes.
Build form definitions, then dynamically build the UI and store the data in some dictionary-like format (e.g. JSON/XML, or maybe a document-oriented database). I think this is going to be too complex for the scope of this app, especially for the UI (a toy example of this option follows the list).
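Just to illustrate what I mean by option 3, a toy form definition might look like this (all field names are invented):
// Toy example of option 3: a per-version form definition that drives the UI
// and tells the backend how to interpret the stored dictionary of values.
const formDefinitions = {
  "2023": {
    fields: [
      { name: "applicantName", type: "text",   required: true },
      { name: "income",        type: "number", required: true }
    ]
  },
  "2024": {
    fields: [
      { name: "applicantName", type: "text",   required: true },
      { name: "income",        type: "number", required: true },
      { name: "dependents",    type: "number", required: false } // new this year
    ]
  }
};

// Each stored record keeps the version it was entered against:
const record = {
  formVersion: "2024",
  values: { applicantName: "A. Smith", income: 52000, dependents: 2 }
};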
What other possibilities are there? Does anyone have experience doing this? I'm looking for some design patterns to help deal with the complexity.
First, I will speak to your solutions above and then I will give my answer.
Creating a new table for each version is going to require new programming every year, since you will not be able to dynamically join to the new table and include the new columns easily. That seems pretty obvious and really makes this a bad choice.
The issues you mentioned with adding the columns to the same form are correct. Also, whatever database you are using has a max on how many columns it can handle and how many bytes it can have in a row. That could become another concern.
The third option I think is the closest to what you want. I would not store the new column data as JSON/XML unless it is for duplication to increase speed. I think this is your best option.
The only option you didn't mention was storing all of the data in one database field and using XML to parse it. That option would make it tough to query and write reports against.
If I had to do this:
The first table would have the columns ID (seeded), Name, InputType, CreateDate, ExpirationDate, and CssClass. I would call it tbInputs.
The second table would have 5 columns: ID, Input_ID (with FK to tbInputs.ID), Entry_ID (with FK to the main/original table), Value, and CreateDate. The FK to the main/original table would allow you to find which items were attached to which form entry. I would call this table tbInputValues.
If you don't plan on having that base table, then I would use a simple table that tracks the creation date, creator ID, and the form_id.
Once you have those, you will just need to create a dynamic form that pulls back all of the inputs that are currently active and displays them. I would put all of the dynamic controls inside some kind of container like a <div>, since that allows you to loop through them without knowing the name of every element. Then insert the ID of each input and its value into tbInputValues.
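A minimal sketch of that idea in client-side JavaScript (the field names match the tbInputs/tbInputValues columns above; the container id and everything else is an assumption):
// Sketch: render the currently active inputs from tbInputs into a container
// <div>, then collect { Input_ID, Value } pairs destined for tbInputValues.
function renderDynamicForm(inputs) {
  var container = document.getElementById("dynamicForm");
  container.innerHTML = "";
  inputs.forEach(function (def) {          // def: a row from tbInputs
    var label = document.createElement("label");
    label.textContent = def.Name;
    var field = document.createElement("input");
    field.type = def.InputType || "text";
    field.className = def.CssClass || "";
    field.dataset.inputId = def.ID;        // remember tbInputs.ID
    container.appendChild(label);
    container.appendChild(field);
  });
}

function collectValues(entryId) {
  // One row per control, ready to insert into tbInputValues.
  return Array.prototype.map.call(
    document.querySelectorAll("#dynamicForm input"),
    function (field) {
      return { Entry_ID: entryId, Input_ID: field.dataset.inputId, Value: field.value };
    }
  );
}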
Create a form to add or remove an input. This would mean you would not have much, if any, maintenance work to do each year.
This solution may not seem like the most elegant, but if executed correctly I do think it is your most flexible option and the one that incurs the least technical debt.
I think the third approach (XML) is the most flexible. A simple XML structure is generated very fast and can be easily versioned and validated against an XSD.
You'd have a table holding the XML in one column and the year/version this XML applies to in another.
Generating UI code based on the schema is basically a bad idea. If you do not require extensive validation, you can opt for a simple editable table.
If you need a custom form every year, I'd look at it as kind of a job guarantee :-) It's important to make the versioning mechanism and extension transparent and explicit though.
For this particular app, we decided to deal with the problem as if there was one form that continuously grows. Due to the nature of the form this seemed more natural than more explicit separation. We will have a mapping of year->field for parts of the application that do need to know which data is for which year.
For the UI, we will be creating a new page for each year's form. Dynamic form creation is far too complex in this situation.