I use MS SQL Server 2008 R2.
I've got a problem; please excuse the long explanation.
We've got an SSAS cube. It is under development at this time, but it is partially working and can be accessed through Excel.
There are projects: a hierarchical parent-child dimension.
There are resources assigned to the projects (e.g. man-hours, building materials, equipment): a dimension with resource types and an M2M fact table: ProjectId-ResourceId-UnitsCount-Cost.
There are milestones for the projects: a dimension with milestone types (a few are defined) and an M2M fact table: ProjectId-MilestoneId-...milestone dates: planned/actual start/finish.
This is a simplified schema.
I need to create a MS Reporting Services report with the following columns:
Project Hierarchy
several columns with pre-defined, "hardcoded" resource type amounts; e.g. the business wants to see a column with man-hours spent and one with concrete consumption in cubic meters. These two clauses can be hardcoded in the query.
several columns with pre-defined, "hardcoded" milestone type dates
this is a simplified schema too; more columns with other dimension slices are needed...
The problem is that I cannot find an elegant way to create this report.
In my current version, I have to create two datasets and query the resource and milestone data in separate MDX queries.
Then I need to use the Reporting Services Lookup function to join the data in the report output.
Please advise:
Is there a possibility to query this data in a single MDX query? When I try something like this:
UNION({{[Dim Resource].[Measure].[man-hour]} + {[Dim Resource].[Measure].[cub-meter]}},
{[Dim Milestone].[Milestone Type].[ProjectStart]})
I get a "different dimensionality" error. Any workarounds?
If I need to output a formatted value like "X 'man-hour' / Y 'cub-meter'", I have to use the Lookup function to get both parts of the formula - is there a better way?
Can I query this data any other way?
Please indicate a direction for googling.
Or... should I just query the data from the source tables with SQL (this is allowed by the security restrictions)?
Thank you in advance.
Perhaps create a new 'virtual cube' to contain data from both of your existing cubes, then query that one.
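If merging the cubes is not an option, another common workaround (a sketch only; the measure, dimension, and cube names below are illustrative, not your actual metadata) is to avoid UNION altogether and expose each hardcoded column as a calculated member on the Measures dimension, so that everything on the columns axis has the same dimensionality:

WITH
MEMBER [Measures].[ManHours] AS
    ([Measures].[Units Count], [Dim Resource].[Resource Type].[man-hour])
MEMBER [Measures].[ConcreteCubM] AS
    ([Measures].[Units Count], [Dim Resource].[Resource Type].[cub-meter])
MEMBER [Measures].[ProjectStartPlanned] AS
    ([Measures].[Planned Start Date], [Dim Milestone].[Milestone Type].[ProjectStart])
SELECT
    {[Measures].[ManHours], [Measures].[ConcreteCubM], [Measures].[ProjectStartPlanned]} ON COLUMNS,
    [Dim Project].[Projects].MEMBERS ON ROWS
FROM [YourCube]

Since every column is now a measure, the "different dimensionality" error does not arise, one SSRS dataset can feed the whole table, and composed values such as "X 'man-hour' / Y 'cub-meter'" can be built in a single expression without Lookup.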
I have an SQL query:
SELECT
FORMAT(datetime_scrapped, 'MMMM-yy') [date],
count(FORMAT(datetime_scrapped, 'MMMM-yy')) as quantity
FROM scrap_log
GROUP BY FORMAT(datetime_scrapped, 'MMMM-yy')
It basically summarises all the entries in the scrap_log table by month/year and counts how many entries are in each month/year. It returns two columns (date and quantity). But I need to execute this in an ASP.NET Core API using Entity Framework Core. I tried using FromSqlRaw(), but this expects all columns of the mapped entity to be returned, so it doesn't work.
I can find plenty of info on implementing GROUP BY and COUNT etc. in EF, but I cannot find anything for the FORMAT(datetime, "MMMM-yy") part. Could somebody please explain to me how to do this?
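One way around the FORMAT() part (a minimal sketch, assuming a ScrapLog entity with a DatetimeScrapped property and a ScrapLogs DbSet on the context; these names are illustrative): group by year and month, which EF Core can translate to SQL, and apply the "MMMM-yy" formatting in memory once the small aggregated result is back:

// using Microsoft.EntityFrameworkCore;  // for ToListAsync()
var counts = await context.ScrapLogs
    .GroupBy(s => new { s.DatetimeScrapped.Year, s.DatetimeScrapped.Month })
    .Select(g => new { g.Key.Year, g.Key.Month, Quantity = g.Count() })
    .ToListAsync();

// FORMAT() is replaced by client-side formatting of the aggregated rows.
var result = counts.Select(x => new
{
    Date = new DateTime(x.Year, x.Month, 1).ToString("MMMM-yy"),
    Quantity = x.Quantity
});

This keeps the grouping translatable to SQL, while only the handful of month buckets, not the raw rows, is materialised for the string formatting.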
EDIT: It seems I am going about this the wrong way in terms of efficiency. I will look into alternative solutions based on the comments already made. Thanks for the fast responses.
I have a simple query which makes a GROUP BY using two fields:
#facturas =
SELECT a.CodFactura,
Convert.ToInt32(a.Fecha.ToString("yyyyMMdd")) AS DateKey,
SUM(a.Consumo) AS Consumo
FROM #table_facturas AS a
GROUP BY a.CodFactura, a.DateKey;
#table_facturas has 4100 rows, but the query takes several minutes to finish. Looking at the job graph I see it uses 2500 vertices, because there are 2500 unique CodFactura+DateKey rows. I don't know if this is normal ADLA behaviour. Is there any way to reduce the number of vertices and execute this query faster?
First: I am not sure your query will actually compile. You would need the Convert expression in your GROUP BY, or you would need to do it in a previous SELECT statement.
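For example, the "previous SELECT" version could look like this (a sketch; #pre is an illustrative name, and the schema is taken from the question):

#pre =
    SELECT a.CodFactura,
           Convert.ToInt32(a.Fecha.ToString("yyyyMMdd")) AS DateKey,
           a.Consumo
    FROM #table_facturas AS a;

#facturas =
    SELECT p.CodFactura,
           p.DateKey,
           SUM(p.Consumo) AS Consumo
    FROM #pre AS p
    GROUP BY p.CodFactura, p.DateKey;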
Secondly: In order to answer your question, we would need to know how the full query is defined. Where does #table_facturas come from? How was it produced?
Without this information, I can only give some wild speculative guesses:
If #table_facturas is coming from an actual U-SQL table, your table is over-partitioned/fragmented. This could be because:
you inserted a lot of data originally with a distribution on the grouping columns, and you either have a predicate that reduces the number of rows per partition and/or you do not have up-to-date statistics (run CREATE STATISTICS on the columns).
you did a lot of INSERT statements, each inserting a small number of rows into the table, thus creating a large number of individual files. This will "scale out" the processing as well. Use ALTER TABLE REBUILD to recompact.
If it is coming from a file set, you may have too many small files in the input. See if you can merge them into fewer, larger files.
If the above does not help, you can also try to hint a small number of rows in the query that creates #table_facturas by adding OPTION(ROWCOUNT=4000).
I am getting my data from a denormalized table, where I keep names and actions (among other things). I want to create a calculated field that will return the number of workgroup names, but only when there are more than five actions present in the DB for a given workgroup.
Here's how I did it when I wanted to check whether a certain action had been registered for a workgroup:
WINDOW_SUM(COUNTD(IF [action] = "ADD" THEN [workgroup_name] END))
When I try to do a similar thing with a count, I get "Cannot mix aggregate and non-aggregate arguments":
WINDOW_SUM(COUNTD(IF COUNT([Number of Records]) > 5 THEN [workgroup_name] END))
I know that there's a problem with the IF clause, but I don't know how to fix it.
How do I change the IF to make it valid? Or maybe there's an easier way to do this that I am missing?
EDIT:
(after Inox's response)
I know that my problem is mixing aggregate with non-aggregate fields. I can't use a filter to do it, because I want to use this later as part of a more complicated view - filtering would destroy the whole idea.
No, the problem is mixing aggregated arguments (e.g., SUM, COUNT) with non-aggregated ones (e.g., any field used directly). And that's what you're doing by mixing COUNT([Number of Records]) with [workgroup_name].
If your goal is to know how many (unique) workgroup_name values have more than 5 records (which seems to be the idea of your code), I think it's easier to filter and then count.
So first you drag workgroup_name to Filters, go to the Condition tab, and select By field, Number of Records, Count, >, 5.
This way you'll keep only the workgroup_name values that have more than 5 records.
Now you can go with a simple COUNTD([workgroup_name]).
EDIT: After clarification
Okay, then you need to add a marker that is fixed in your database, so table calculations won't help you.
By definition, a table calculation depends on the fields that are on the worksheet (and on how you decide to use those fields to partition or address), and it's only calculated AFTER being called in a sheet. That way, each time you call the function it will recalculate, and for some of the analyses you may want to do, the fields you need to make the table calculation correct won't be there.
The same thing applies to aggregations (counts, sums, ...): the aggregation depends, well, on the level of aggregation you have.
In this case it's better to manipulate your data prior to connecting it to Tableau. I don't see a direct way (a single calculated field) that would solve your problem. What can be done is to generate a DB from Tableau (with the aggregated number of records for each workgroup_name), then export it to CSV or MDB and reconnect it to Tableau. But if you can manipulate your database outside Tableau, that's usually a better solution - see the sketch below.
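For example, the pre-aggregation could be a view in the source database (a sketch; denormalized_table and its columns are illustrative names, not your actual schema):

SELECT workgroup_name,
       COUNT(*) AS action_count
FROM denormalized_table
GROUP BY workgroup_name
HAVING COUNT(*) > 5;

Connecting Tableau to this result (or to a view defined this way) lets a plain COUNTD([workgroup_name]) work without mixing aggregation levels.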
We are looking into a more powerful way of collecting and processing data to be used in our reports. For one advanced report on a big database, we need to run two independent SQL queries (on the same data source) and combine them afterwards.
Query1 returns:
user id#1 ... 3 columns
user id#2 ... 3 columns
user id#4 ... 3 columns
Query 2 returns:
user id#1 ... 5 columns
user id#3 ... 5 columns
user id#4 ... 5 columns
What we want to show:
user id#1 ... 3 columns + 5 columns
user id#2 ... 3 columns
user id#3 ... 5 columns
user id#4 ... 3 columns + 5 columns
Although it's counter-intuitive, we found that combining the results of both queries in SQL leads to considerably worse runtime for the SQL query.
We have looked at subdatasets, but from my understanding it's not possible to mix the data from two subdatasets (or from the main dataset plus one subdataset) in a single table.
We have looked at subreports, but from my understanding a subreport will run its query once for each row of the report if I put the subreport in the Details band, as we intend to. For performance reasons, however, we want to run the two queries that we prepared, each exactly once.
We think the most reasonable approach is to write such advanced reports in Java, and that is possible; however, a JavaBean data source cannot access the report parameters. Our database is huge, and therefore we can't just run queries without a WHERE clause and filter afterwards; the Java code needs access to the report parameters.
We are currently looking into implementing JRQueryExecutor as recommended there and there (last comment), or even taking advantage of scriptlets.
But it sounds quite advanced, and we are wondering: are we thinking the wrong way or heading in the wrong direction? And if JRQueryExecutor is the correct way, any example or documentation would be welcome.
We are also considering trying to refactor our SQL to achieve the result with only one query, but we do feel that the reporting system ought to allow us to manipulate the data in Java as well.
In the end we made it work with a scriptlet. In afterReportInit, inheriting from JRDefaultScriptlet, you get the parameters and the data source from the parametersMap, and you can then fill in the data source from Java.
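A minimal sketch of that approach (all class, query, and parameter names here are illustrative assumptions, not our actual code): the report declares a MERGED_ROWS parameter holding an empty java.util.List that backs the report's data source, plus a hypothetical USER_FILTER parameter; the scriptlet runs each query exactly once and merges the rows by user id:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import net.sf.jasperreports.engine.JRDefaultScriptlet;
import net.sf.jasperreports.engine.JRScriptletException;

public class MergeQueriesScriptlet extends JRDefaultScriptlet {

    @Override
    public void afterReportInit() throws JRScriptletException {
        try {
            // Report parameters are fully accessible here, unlike with a JavaBean data source.
            // REPORT_CONNECTION assumes the report is filled with a JDBC connection.
            Connection conn = (Connection) getParameterValue("REPORT_CONNECTION");
            String filter = (String) getParameterValue("USER_FILTER"); // hypothetical parameter

            // Run each prepared query exactly once, merging rows by user id
            // (a full outer join done in Java: columns missing for a user stay null).
            Map<Object, Map<String, Object>> merged = new LinkedHashMap<>();
            mergeQuery(conn, "SELECT user_id, c1, c2, c3 FROM t1 WHERE region = ?", filter, merged);
            mergeQuery(conn, "SELECT user_id, c4, c5, c6, c7, c8 FROM t2 WHERE region = ?", filter, merged);

            // Hand the merged rows to the report, e.g. through a list parameter
            // that backs a JRMapCollectionDataSource.
            @SuppressWarnings("unchecked")
            List<Map<String, Object>> target = (List<Map<String, Object>>) getParameterValue("MERGED_ROWS");
            target.addAll(merged.values());
        } catch (Exception e) {
            throw new JRScriptletException(e);
        }
    }

    private void mergeQuery(Connection conn, String sql, String filter,
                            Map<Object, Map<String, Object>> merged) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, filter);
            try (ResultSet rs = ps.executeQuery()) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    Map<String, Object> row =
                            merged.computeIfAbsent(rs.getObject("user_id"), k -> new LinkedHashMap<>());
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        row.put(md.getColumnLabel(i), rs.getObject(i));
                    }
                }
            }
        }
    }
}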
I am currently experimenting with the Tableau Extract API to generate TDE files from the tables I have in a PostgreSQL database. I was able to write code to generate a TDE from a single table, but I would like to do this for multiple joined tables. To be more specific: if I have two tables that are inner joined on some field, how would I generate the TDE for this?
I can see that if I am working with a small number of tables, I could use a SQL query with JOIN clauses to create one gigantic table, and generate the TDE from that table:
SELECT *
INTO new_table_1
FROM table_1 INNER JOIN table_2
ON table_1.id_1 = table_2.id_2;

SELECT *
INTO new_table_2
FROM new_table_1 INNER JOIN table_3
ON new_table_1.id_1 = table_3.id_3;
and then generate the TDE from new_table_2.
However, I have some tables that have over 40 different fields, so this could get messy.
Is this even a possibility with the current version of the API?
You can read from as many tables or other sources as you want, or use a complex query with lots of joins, or create a view and read from that. Usually, creating a view is helpful when you have a complex query joining many tables.
The data extract API is totally agnostic about how or where you get the data to feed it -- the whole point is to allow you to grab data from unusual sources that don't have pre-built drivers for Tableau.
Since Tableau has a Postgres driver and can read from it directly, you don't need to write a program with the data extract API at all. You can define your extract with Tableau Desktop. If you need to schedule automated refreshes of the extract, you can use Tableau Server or its tabcmd command.
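To make the first suggestion concrete, here is a sketch of what feeding a joined query into an extract could look like (it assumes psycopg2 and the old dataextract module, and the table, column, and connection names are illustrative). The join happens entirely in PostgreSQL; the API only ever sees the single result set, written into the one table a TDE holds, which must be named "Extract":

import psycopg2
import dataextract as tde

conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical connection
cur = conn.cursor()
# PostgreSQL does the join; the extract receives one flat result set.
cur.execute("""
    SELECT t1.id_1, t1.some_text, t2.some_number
    FROM table_1 AS t1
    INNER JOIN table_2 AS t2 ON t1.id_1 = t2.id_2
""")

extract = tde.Extract("joined.tde")
tableDef = tde.TableDefinition()
tableDef.addColumn("id_1", tde.Type.INTEGER)
tableDef.addColumn("some_text", tde.Type.UNICODE_STRING)
tableDef.addColumn("some_number", tde.Type.DOUBLE)
table = extract.addTable("Extract", tableDef)  # a TDE holds exactly one table, named "Extract"

for id_1, some_text, some_number in cur:
    row = tde.Row(tableDef)
    row.setInteger(0, id_1)
    row.setString(1, some_text)
    row.setDouble(2, some_number)
    table.insert(row)
    row.close()

extract.close()
conn.close()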
Many thanks for your replies. I am aware that I could use Tableau Desktop to define my extract; in fact, I have done this many times before. I am trying to create the extracts using the API because I need to create some calculated fields, which are nearly impossible to create using Tableau Desktop.
At this point, I am hesitant to use JOINs in the SQL query, because the resulting table would look too complicated to comprehend (some of these tables also have the same field names).
When you say that I can read from multiple tables or sources, do you mean with the Tableau Extract API? At this point, I cannot find anything in this API that accommodates multiple sources. For example, I know that when I use multiple tables in Tableau Desktop, there are icons on the left-hand side telling me that the extract is composed of multiple tables. This just doesn't seem to be possible with the API, which leaves me stranded. Anyway, thank you again for your replies.
Going back to the topic, this is something that I tried a few days ago in my Python code:
import os
import dataextract as tde  # assuming the old Extract API module

try:
    tdefile = tde.Extract("extract.tde")
except:
    os.remove("extract.tde")
    tdefile = tde.Extract("extract.tde")

tableDef = tde.TableDefinition()
# Read each column in table and set the column data types using tableDef.addColumn
# Some code goes here...

for eachTable in tableNames:
    tableAdd = tdefile.addTable(eachTable, tableDef)
    # Use SQL query to retrieve bunch_of_rows from eachTable
    for some_row in bunch_of_rows:
        # Read each row in table, and set the values in each column position of each row
        # Some code goes here...
        tableAdd.insert(some_row)
        some_row.close()

tdefile.close()
When I execute this code, I get an error saying that eachTable has to be called "Extract".
Of course, this code has its flaws, as there is nowhere in it that specifies how the tables are joined.
So I am a little thrown off here, because it doesn't seem like I can use multiple tables unless I use JOINs to generate one table that contains everything.