I'm looking for a way to merge cells, but only when a condition is true. Other suggestions to my problem are fine, too.
Background: I need to create a Jasper report for which I got a design/layout specification. All data is provided through a single stored procedure.
The layout is mostly a simple table with data, but some rows differ from the rest and contain some sort of interim report data (which is not calculated from the previous values). Those rows also have a different row layout. The number of rows before and after them is dynamic.
Example:
------------------------------------
| data | data | data | data | data |
------------------------------------
| data | data | data | data | data |
------------------------------------
| data | data | data | data | data |
------------------------------------
| some text | abc | def |
------------------------------------
| data | data | data | data | data |
------------------------------------
| different text | xyz |
------------------------------------
The procedure delivers all of this data in a single data set, including the text of those special rows. For those cells that should be merged with their left adjacent cell, the procedure returns NULL; all other cells always contain some sort of data.
Now I could use some help actually merging those cells. If there are other/better ways to achieve the given layout, feel free to suggest them.
Unfortunately I have no control over the stored procedure, but slight alterations might be possible.
Related
I have a database that contains a username with 3 different phone numbers and 3 different ids. We will also have 3 different types of notes for each username.
I am using Postgres, and the data is expected to grow to millions of rows. Fast querying and inserting of new data are really important.
Which schema would be better for that:
username | no1(string) | no2(string) | no3(string) | id1(string) | id2(string) | id3(string) | note1(string) | note2(string) | note3(string)
OR
username | no(JSON) | id(JSON) | note(JSON)
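For reference, a minimal sketch of the two options as Postgres DDL (the table names and the text/jsonb type choices are illustrative assumptions, not taken from the actual schema):

-- Option 1: one fixed column per value
CREATE TABLE contacts_flat (
    username text PRIMARY KEY,
    no1 text, no2 text, no3 text,
    id1 text, id2 text, id3 text,
    note1 text, note2 text, note3 text
);

-- Option 2: one JSON column per value group (jsonb is usually preferred over json)
CREATE TABLE contacts_json (
    username text PRIMARY KEY,
    no   jsonb,
    id   jsonb,
    note jsonb
);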
I have been searching for a way to combine two or more rows of one table in a database into one row.
I am currently creating multiple web-based forms that connect to one table in my database. Is there any way to write some MySQL and PHP code that will take separate form submissions and put them into one row of the database instead of multiple rows?
Here is an example of what is going into the database:
This is all in one table with three rows.
Form_ID represents the three different forms that I used to insert the data into the table.
Form_ID | Lot_ID | F_Name | L_Name | Date       | Age
--------------------------------------------------------
1       | 1      | John   | Evans  | *NULL*     | *NULL*
--------------------------------------------------------
2       | *NULL* | *NULL* | *NULL* | 2017-07-06 | *NULL*
--------------------------------------------------------
3       | *NULL* | *NULL* | *NULL* | *NULL*     | 22
This is an example of three separate forms going into one table. Every time the submit button is hit, the data is just inserted as a new row.
I need some sort of join or update once the submit button is hit to replace the preceding NULL values.
Here is what I want to do after the submit button is hit:
I want it all combined into one row, but still in one table.
Form_ID still identifies the three separate forms, but everything is in one row now.
Form_ID | Lot_ID | F_Name | L_Name | Date       | Age
------------------------------------------------------
1       | 1      | John   | Evans  | 2017-07-06 | 22
My goal is that once one form has been submitted, the next, different form submission replaces the NULL values in the existing row, and so on, to build up a single row of information.
I found a way to solve this issue. I used UPDATE tablename SET columnname = newValue WHERE Form_ID = id
This way, when I want to update rows that have blank values, I have it find the matching IDs.
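As a minimal sketch of that approach (the table name submissions and the literal values are assumptions for illustration), the first form inserts the row and the later form submissions update it:

-- First form creates the row
INSERT INTO submissions (Form_ID, Lot_ID, F_Name, L_Name) VALUES (1, 1, 'John', 'Evans');

-- Later form submissions fill in the remaining columns of that same row
UPDATE submissions SET `Date` = '2017-07-06' WHERE Form_ID = 1;
UPDATE submissions SET Age = 22 WHERE Form_ID = 1;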
I migrated from Drive tables to a 2nd gen MySQL Google Cloud SQL data model. I was able to insert 19 rows into the following Question table in AppMaker:
+-------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| SurveyType | varchar(64) | NO | PRI | NULL | |
| QuestionNumber | int(11) | NO | PRI | NULL | |
| QuestionType | varchar(64) | NO | | NULL | |
| Question | varchar(512) | NO | | NULL | |
| SecondaryQuestion | varchar(512) | YES | | NULL | |
+-------------------+--------------+------+-----+---------+-------+
I queried the data from the command line and know it is good. However, when I query the data in AppMaker like this:
var newQuery = app.models.Question.newQuery();
newQuery.filters.SurveyType._equals = surveyType;
newQuery.sorting.QuestionNumber._ascending();
var allRecs = newQuery.run();
I get 19 rows with the same data (the first row) instead of the 19 different rows. Any idea what is wrong? Additionally (and possibly related) my list rows in AppMaker are not showing any data. I did notice that _key is not being set correctly in the records.
(Edit: I thought maybe having two columns as the primary key was the problem, but I tried having the PK be a single identity column, same result.)
Thanks for any tips or pointers.
You have two primary key fields in your table, which is problematic according to the App Maker Cloud SQL documentation: https://developers.google.com/appmaker/models/cloudsql
App Maker can only write to tables that have a single primary key field—If you have an existing Google Cloud SQL table with zero or multiple primary keys, you can still query it in App Maker, but you can't write to it.
This may account for the view's inability to properly display each row and to properly set the _key.
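If you want to keep the table in Cloud SQL, a minimal sketch of a single-primary-key version of it (the surrogate Id column and the unique constraint are assumptions; the other columns mirror the schema above):

CREATE TABLE Question (
    Id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- single surrogate key App Maker can write to
    SurveyType VARCHAR(64) NOT NULL,
    QuestionNumber INT NOT NULL,
    QuestionType VARCHAR(64) NOT NULL,
    Question VARCHAR(512) NOT NULL,
    SecondaryQuestion VARCHAR(512),
    UNIQUE KEY uq_survey_question (SurveyType, QuestionNumber)  -- keeps the old composite key as a uniqueness rule
);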
I was able to get this to work by creating the table inside AppMaker rather than using a table created directly in the Cloud Shell. Not sure if existing tables are not supported or if there is a bug in AppMaker, but since it is working I am closing this.
I have two tables on a report. Across the two tables, I need 8 total columns. The tables have the same data; only the columns shown are different. I need the first table drawn first, and then the next table after that on a new page.
Here's the report structure I would like...
Page 1:
Properties Table
Id | Name | etc...
--------------------
1 | Bob | ...
2 | Matt | ...
3 | John | ...
...
After Properties table, on a new page:
Relationships Table
Id | Relationships | etc...
-----------------------------
1 | Matt | ...
2 | Bob | ...
3 | (NULL) | ...
...
Assuming the data source is the same for both tables (it returns all 8 columns I need), how can I achieve this on the report?
I believe the idea is to use subreports, but I can't find a mechanism to pass in the data from the main report.
Is there a simple (i.e. non-hacky) and race-condition-free way to create a partitioned sequence in PostgreSQL? Example:
Using a normal sequence in Issue:
| Project_ID | Issue |
| 1          | 1     |
| 1          | 2     |
| 2          | 3     |
| 2          | 4     |
Using a partitioned sequence in Issue:
| Project_ID | Issue |
| 1          | 1     |
| 1          | 2     |
| 2          | 1     |
| 2          | 2     |
I do not believe there is a simple way that is as easy as regular sequences, because:
A sequence stores only one number stream (next value, etc.). You want one for each partition.
Sequences have special handling that bypasses the current transaction (to avoid the race condition). It is hard to replicate this at the SQL or PL/pgSQL level without using tricks like dblink.
The DEFAULT column property can use a simple expression or a function call like nextval('myseq'), but it cannot refer to other columns to tell the function which stream the value should come from (see the sketch just below).
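For contrast, this is what the ordinary mechanism looks like; the DEFAULT sees only the sequence, never the row's Project_ID (names are illustrative):

CREATE SEQUENCE issue_seq;

CREATE TABLE issue_plain (
    project_id integer NOT NULL,
    issue      integer NOT NULL DEFAULT nextval('issue_seq')  -- one global stream; cannot branch on project_id
);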
You can make something that works, but you probably won't think it simple. Addressing the above problems in turn:
Use a table to store the next value for all partitions, with a schema like multiseq (partition_id, next_val).
Write a multinextval(seq_table, partition_id) function that does something like the following:
Create a new transaction independent of the current transaction (one way of doing this is through dblink; I believe some other server languages can do it more easily).
Lock the table mentioned in seq_table.
Update the row where the partition id is partition_id, with an incremented value. (Or insert a new row with value 2 if there is no existing one.)
Commit that transaction and return the previous stored id (or 1).
Create an insert trigger on your projects table that uses a call to multinextval('projects_table', NEW.Project_ID) for insertions.
I have not used this entire plan myself, but I have tried something similar to each step individually. Examples of the multinextval function and the trigger can be provided if you want to attempt this...
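As a starting point, here is a minimal sketch along those lines (all names are illustrative; the separate seq_table argument and the dblink-based independent transaction from the first step are omitted for brevity, so the lock on multiseq is held until the inserting transaction commits):

-- The partitioned table from the question (illustrative definition)
CREATE TABLE issue (
    project_id integer NOT NULL,
    issue      integer NOT NULL
);

-- One counter row per partition
CREATE TABLE multiseq (
    partition_id integer PRIMARY KEY,
    next_val     integer NOT NULL
);

-- Returns the next value for the given partition
CREATE OR REPLACE FUNCTION multinextval(p_partition_id integer) RETURNS integer AS $$
DECLARE
    v integer;
BEGIN
    -- Serialize access to the counters (steps 2-4 above, without the independent transaction)
    LOCK TABLE multiseq IN EXCLUSIVE MODE;

    UPDATE multiseq
       SET next_val = next_val + 1
     WHERE partition_id = p_partition_id
    RETURNING next_val - 1 INTO v;

    IF NOT FOUND THEN
        -- First value for this partition
        INSERT INTO multiseq (partition_id, next_val) VALUES (p_partition_id, 2);
        v := 1;
    END IF;

    RETURN v;
END;
$$ LANGUAGE plpgsql;

-- BEFORE INSERT trigger on the table holding the Issue column
CREATE OR REPLACE FUNCTION issue_set_number() RETURNS trigger AS $$
BEGIN
    NEW.issue := multinextval(NEW.project_id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER issue_number_trigger
BEFORE INSERT ON issue
FOR EACH ROW EXECUTE PROCEDURE issue_set_number();

With this simplification, concurrent inserts into different partitions are serialized as well, which is exactly the cost the dblink-based variant described above is meant to avoid.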