I have a table TestTable:
ID  Name    value
--  ------  -----
1   SOUMYA  10
1   SOUMYA  8
I want to write a query in DB2 which will return:
ID  Name    Oldvalue  NewValue  Difference
--  ------  --------  --------  ----------
1   SOUMYA  10        8         2
How do you want to retrieve the old value? Where is it stored?
In DB2 there is a feature called Time Travel that keeps the old values. It is specific to DB2 since version 10.1. There are different ways to write such queries using Time Travel.
Please check this video for more information: https://www.youtube.com/watch?v=7JrQdzdYwOA
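As a rough illustration, here is a minimal sketch of such a query, assuming TestTable has been set up as a system-period temporal table (that setup is not shown in the question) and using an arbitrary example timestamp:
-- Minimal sketch: read the row values as they were at a given point in time.
-- TestTable must already be a system-period temporal table; the timestamp is illustrative.
SELECT ID, Name, value
FROM TestTable
FOR SYSTEM_TIME AS OF TIMESTAMP('2023-01-01-00.00.00')
WHERE ID = 1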
You can also create a trigger on the table to log the old and new values in another table, and then just query that other table. However, the timing problem arises here too, and that is the reason Time Travel exists.
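A hedged sketch of that trigger approach; the log table TestTable_Log and its columns are illustrative names, not from the question:
-- Record the old and new values (and their difference) whenever value is updated.
CREATE TRIGGER TestTable_Audit
  AFTER UPDATE OF value ON TestTable
  REFERENCING OLD AS o NEW AS n
  FOR EACH ROW
  INSERT INTO TestTable_Log (ID, Name, Oldvalue, NewValue, Difference, ChangedAt)
  VALUES (o.ID, o.Name, o.value, n.value, o.value - n.value, CURRENT TIMESTAMP)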
I've got an InfluxDB table consisting of
> SELECT * FROM results
name: results
time artnum duration
---- ------ --------
1539084104865933709 1234 34
1539084151822395648 1234 81
1539084449707598963 2345 56
1539084449707598123 2345 52
and other tags. Both artnum and duration are fields (that is changeable, though). I'm now trying to create a query (to use in Grafana) that gives me the following result, with a calculated mean() and the number of measurements for that artnum:
artnum  mean_duration  no. measurements
------  -------------  ----------------
1234    58             2
2345    54             2
First of all: Is it possible to exclude the time column? Secondly, what is the InfluxDB way to create such a table? I started with
SELECT mean("duration"), "artnum" FROM "results"
resulting in ERR: mixing aggregate and non-aggregate queries is not supported. Then I found https://docs.influxdata.com/influxdb/v1.6/guides/downsampling_and_retention/, which looked like what I wanted to do. I then created an infinite retention policy (duration 0s) and a continuous query
> CREATE CONTINUOUS QUERY "cq" ON "test" BEGIN
SELECT mean("duration"),"artnum"
INTO infinite.mean_duration
FROM infinite.test
GROUP BY time(1m)
END
I followed the instructions, but after I fed some data to the DB and waited for 1m, `SELECT * FROM "infinite"."mean_duration"` did not return anything.
Is that approach the right one or should I continue somewhere else? The very goal is to see the updated table in grafana, refreshing once a minute.
InfluxDB is a time series database, so you really need the time dimension - also in the response. You will have a hard time with Grafana if your query returns non-time-series data, so don't try to remove time from the query. A better option is to hide time in the Grafana table panel - use column styles and set Type: Hidden.
InfluxDB doesn't have tables, but measurements. I guess you only need a query with proper grouping, no advanced continuous queries, etc. Try and improve this query*:
SELECT
MEAN("duration"),
COUNT("duration")
FROM results
GROUP BY "artnum" fill(null)
*You may have a problem with grouping in your case, because artnum is an InfluxDB field - a better option is to save artnum as an InfluxDB tag.
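If you also want to keep the time dimension mentioned above, here is a hedged variant of the same query, assuming artnum has been saved as a tag and using Grafana's $timeFilter and $__interval template variables:
SELECT MEAN("duration"), COUNT("duration")
FROM "results"
WHERE $timeFilter
GROUP BY time($__interval), "artnum" fill(null)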
Wondering about the best SQL to handle the situation below: the client only wants to see invoices that have been declined. I started with only showing rows where STATUS_ID = 2, but then realized the invoice was paid because it was resubmitted and accepted, so that didn't work. What is the best way to handle 2 records like the ones below, where I don't want the SQL to return any records if the manifest + order code combination has a 1? Would you do a MIN on STATUS_ID or something of that nature?
VENDOR NAME   manifest      ORDER_CODE  STATUS_ID
VENDOR 12345  BHGSDKJF1234  RU07        2 (invoice decline)
VENDOR 12345  BHGSDKJF1234  RU07        1 (paid)
This trick can work for you in this case, but it doesn't solve the general case (what happens if the STATUS_ID for paid is 3, and all possible values are 0-5?).
In general you can use a CASE expression that returns 1 (true) if the invoice has STATUS_ID = 1, and 0 otherwise. Then take the MAX() for each invoice, as sketched below.
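A minimal sketch of that idea, assuming the table is called INVOICES (the question does not name it): keep only manifest / order-code pairs that have a declined row and no paid row.
SELECT manifest, ORDER_CODE
FROM INVOICES
GROUP BY manifest, ORDER_CODE
HAVING MAX(CASE WHEN STATUS_ID = 1 THEN 1 ELSE 0 END) = 0   -- never paid
   AND MAX(CASE WHEN STATUS_ID = 2 THEN 1 ELSE 0 END) = 1   -- has a declined row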
You can also consider another design that might work for you:
Add a time/timestamp column (maybe, for your purpose, you can use SYSDATE as the insertion time of the record in the DB).
Once you have a time column, you can pick the latest STATUS_ID for each invoice (get the STATUS_ID from the row with the max time), as in the sketch below.
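A hedged sketch of that design, assuming the added column is called CREATED_AT and the table is called INVOICES (both names are illustrative):
-- Show invoices whose most recent status is still "declined" (STATUS_ID = 2).
SELECT i.*
FROM INVOICES i
JOIN (SELECT manifest, ORDER_CODE, MAX(CREATED_AT) AS last_time
      FROM INVOICES
      GROUP BY manifest, ORDER_CODE) latest
  ON  i.manifest = latest.manifest
  AND i.ORDER_CODE = latest.ORDER_CODE
  AND i.CREATED_AT = latest.last_time
WHERE i.STATUS_ID = 2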
I'm working in Pentaho 4.4.1-GA (Kettle / PDI). The database is Postgres.
I need to be able to insert multiple records into a fact table based on the fields that come from a single record. The single record contains fields:
productcode1, price1
productcode2, price2
productcode3, price3
...
productcode10,price10
So if there was a value for each of the 10 productcode / prices then I'd need to insert a total of 10 records into the fact table. If there were values for 4 of the combinations, then I'd need to insert 4 records into the fact table, etcetera. All field values for the fact records would be identical except for the PK (generated by sequence), product codes, and prices.
I figure that I need some type of looping construct which would let me check whether or not a value was present for each productcode/price field pair, and if so, do an insert/update step on the fact table with the desired field values. I'm just not sure how to do this in Pentaho.
Any ideas? All suggestions are welcome :)
Thank You,
Rakesh
Could you give a sample input and output for your scenario?
From your example data I can infer that if there are 10 different product codes and only 4 product prices you want to have 4 records inserted into your table. Is that so?
Well, for a start you can add a constant value of 1 to those records by filtering for NOT NULL and then use a Group By step to count the number of 1's. This would give you the count. BTW, it would be helpful if you could provide more details on which columns you would be loading, as there are ways to make a PDI transformation execute multiple times.
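For reference, since the database is Postgres, the one-row-to-many insert described in the question can also be sketched in plain SQL; staging_row and fact_table are illustrative names, and the sequence-generated PK and the other (identical) dimension columns are omitted:
-- Hedged sketch: unpivot the ten productcode/price pairs into one fact row each,
-- skipping pairs with no product code.
INSERT INTO fact_table (product_code, price)
SELECT product_code, price
FROM (
    SELECT productcode1 AS product_code, price1 AS price FROM staging_row
    UNION ALL
    SELECT productcode2, price2 FROM staging_row
    UNION ALL
    SELECT productcode3, price3 FROM staging_row
    -- ... continue up to productcode10, price10
) unpivoted
WHERE product_code IS NOT NULL;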
I'm working on an iPhone application, using SQLite as the backend.
In my application I have two tables in my DB: ProjectsTable and Project_Hours_Table.
In the first table I have a list of projects with their unique IDs.
In the second table I have project IDs (as a foreign key) and the respective working hours.
Now my requirement is to get the project IDs and the sum of each project's working hours,
as follows
Project_ID  Hours
1           1
2           0
3           5 (1+4)
4           11 (5+6)
5           2
6           13 (7+6)
7           3
Can anyone please provide suggestions to implement the query?
Thanks in advance.
select Project_Id,
sum(Hours) as Hours
from Proj_HoursTable
group by Project_Id
Since both of the required fields are available in Proj_HoursTable, there is no need to join to ProjectsTable. Adding the project name to the output would require a join to ProjectsTable (this assumes project ID and project name are 1 to 1):
select pjt.ProjName,
pht.Project_Id,
sum(pht.Hours) as Hours
from Proj_HoursTable pht join ProjectsTable pjt
on pjt.Proj_Id = pht.Project_Id
group by pht.Project_Id, pjt.ProjName
First you need to get the Proj_Id for the particular project (the one whose total hours you want to show) and store it in a variable.
Then write a query like this:
select * from Project_Hours_Table where Proj_Id = ?
Bind the stored project ID to the ? placeholder; this query returns all rows for that specific Proj_Id.
Store that data in an NSMutableDictionary, then get the value of the Hours field via your dictionary key @"hour".
That value comes back as an NSString, and you need to convert it to an int with
int hourCount = [StringHourValue intValue];
Then add the values up, and that will give you the total number of hours.
Yes, I know this question has been asked MANY times, but after reading all the posts I found that there wasn't an answer that fits my need. So, here's my question: I would like to take a column of values and pivot them into rows of 6 columns.
I want to take this...          And turn it into this...

G                               Letter  Date    Code  Ammount   Name      Account
081278                          G       081278  12    00123535  John Doe  123456
12
00123535
John Doe
123456
I have 110,000 values in this one column in one table called TempTable. I need all the values displayed because each row is an entity unto itself. For instance, there is one unique entry for each of the Letter, Date, Code, Ammount, Name, and Account columns. I understand that an aggregate function is required, but is there a workaround that will allow me to get this desired result?
Just use a MAX aggregate
If one row = one column (per group of 6 rows) then MAX of a single value = that row value.
However, the data you've posted is insufficient. I don't see anything to:
associate the 6 rows per group
distinguish whether a row is "Letter" or "Name"
There is no implicit row order or number to rely upon to generate the groups
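A hedged sketch of that MAX-aggregate approach, assuming TempTable can be given a value column (val) and a sequential row number (rn) to supply the missing grouping information (neither exists in the posted data):
SELECT
    MAX(CASE WHEN pos = 1 THEN val END) AS Letter,
    MAX(CASE WHEN pos = 2 THEN val END) AS [Date],
    MAX(CASE WHEN pos = 3 THEN val END) AS Code,
    MAX(CASE WHEN pos = 4 THEN val END) AS Ammount,
    MAX(CASE WHEN pos = 5 THEN val END) AS [Name],
    MAX(CASE WHEN pos = 6 THEN val END) AS Account
FROM (
    SELECT val,
           (rn - 1) / 6 AS grp,      -- which group of 6 consecutive rows
           (rn - 1) % 6 + 1 AS pos   -- position within that group
    FROM TempTable
) t
GROUP BY grp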
Unfortunately, the maximum number of columns in a SQL Server 2008 SELECT statement is 4,096, as per the MSDN Maximum Capacity documentation.
Instead of using a pivot, you might consider dynamic SQL to get what you want:
-- Build a comma-separated list of quoted values, then execute it dynamically.
Declare @SQLColumns nvarchar(max), @SQL nvarchar(max)
select @SQLColumns = (select ''''+ColName+''',' from TableName for XML Path(''))
set @SQLColumns = left(@SQLColumns, len(@SQLColumns)-1)   -- drop the trailing comma
set @SQL = 'Select ' + @SQLColumns
exec sp_ExecuteSQL @SQL