We are implementing SSAS 2017 in tabular mode for our reporting needs. I am trying to create partitions in one of the tables and built a SQL query for them. The query runs fine against the database, but it throws an error when validated in the partition manager. I tried the Query Editor but couldn't get much out of it; the advanced editor in the Query Editor says there are no syntax errors.
Following is the query I am passing:
SELECT
[Account - Company Code],
[Additional 1],
[Additional 2],
[Additional 3],
[Additional 4],
[Additional 5],
[Additional 6]
FROM dbo.[Table Name]
WHERE [Date - Accounting] >= '2012-01-01' AND [Date - Accounting] <= '2012-01-31'
The error I get in Query Editor is
Expression.SyntaxError: Invalid identifier.
Please help me sort out the issue.
I read on MSDN that in a model at the 1400 compatibility level, partitions need to be defined using the Power Query M language, but I was unable to find any examples of how to do this.
I found the tutorial to do this. Posting it here so that anyone can refer to it in the future.
https://learn.microsoft.com/en-us/azure/analysis-services/tutorials/aas-lesson-10-create-partitions
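For reference, at the 1400 compatibility level a partition source is an M expression rather than raw SQL, which is likely why the SQL text fails validation. A minimal sketch of what the January 2012 partition above might look like in M (the data source, database, and table names are placeholders and must match the ones defined in your model):

let
    Source = #"SQL/myserver;MyDatabase",    // placeholder: your model's structured data source
    dbo_Table = Source{[Schema="dbo", Item="Table Name"]}[Data],
    Filtered = Table.SelectRows(
        dbo_Table,
        // assumes a date column; use #datetime(2012, 1, 1, 0, 0, 0) if it is a datetime
        each [#"Date - Accounting"] >= #date(2012, 1, 1)
             and [#"Date - Accounting"] <= #date(2012, 1, 31)
    )
in
    Filtered

Each partition then gets its own copy of this expression with a different date range in the Table.SelectRows filter.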
My Tableau Prep was updated to 2021.4 this morning. After the update, one of my flows started encountering an error when refreshing. The error message is "System error: AqlProcessor evaluation failed: No more data to read from socket". I was able to identify that the error happens on an Aggregation step (Aggregate 7 in my attached screenshot). Basically, I added an output step right before and right after Aggregate 7 to pin down the location of the failure: the output before Aggregate 7 worked, but the output after it failed.
However, when I hover over Aggregate 7, it shows the sample result. I'm not sure what the error means. It only started happening after the Tableau Prep upgrade.
Please update to 2022.2; they have resolved it:
https://www.tableau.com/support/releases/prep/2022.2.1
The release notes list it as issue 1344295: "When running a flow with a Join and Union step, it would sometimes result in the following error, 'AqlProcessor evaluation failed: no such column'."
I have a dataset in Quicksight that is connected to a Redshift table but pulls from SPICE. The dataset is scheduled to refresh from Redshift to SPICE daily. It is a small table, I am using only a fraction of my SPICE capacity, and this method has been working fine for almost two years.
For some reason, the SPICE table is suddenly not refreshing completely, and I can't figure out why.
There are 183 records in the table, but only 181 records in the SPICE dataset. If I query the table using SQL Workbench/J I get 183 records, but only 181 in the QuickSight dataset.
I have tried to refresh multiple times & have also set the dataset to query directly to bypass SPICE and still cannot get those other two rows returned.
Nothing has changed in our permissions or anything about the Redshift-Quicksight IAM config.
Any ideas about what could possibly be going on here?
Thanks for any help!
UPDATE: As I mentioned, if I SELECT * from the table with SQL Workbench/J, I get the 183 rows I expect. However, if I run the same SELECT * in the AWS query editor v2, I only get 181 rows. Can anyone explain what is causing this discrepancy?
SOLVED: The difference is that my load process now requires an explicit COMMIT, where it did not before. My loading session could see its own uncommitted rows in SQL Workbench/J, but every other client (the AWS query editor v2, and QuickSight's SPICE refresh) could only see the committed data.
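For anyone hitting the same symptom, a minimal sketch of the pattern (schema, table, and column names here are hypothetical):

BEGIN;
DELETE FROM reporting.my_table;                      -- hypothetical reload step
INSERT INTO reporting.my_table (id, created_at)
SELECT id, created_at FROM staging.my_table;
COMMIT;  -- without this, other sessions never see the new rows

Without the final COMMIT, the session that performed the load still sees all 183 rows, while any other connection (including the SPICE refresh) keeps reading the last committed snapshot.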
I have an issue with the Data set - Execution server. I am using PostgreSQL as the DB. I want to calculate the difference between two date columns for my report. The queries I have used in the DB are:
Query 1:
SELECT end_date as end,
start_date as start,
processid as pidd,
AGE(end_date, start_date) as duration
from processinstancelog
Query 2:
select end_date, start_date, processid, end_date - start_date as duration
from processinstancelog
Both queries return the correct expected result in the Postgres DB. But when I use the same queries in Data set > Execution server, the "duration" column is not shown.
Question
Can anyone please advise on why the data set is not showing the duration column?
Many Thanks
How are you using the query in the execution server? Are you implementing the advanced query functionality? If yes, can you please share the exact steps you are following and your advanced query definition for review?
Answer: I deleted the old setup, installed a fresh jBPM setup, and the data set started appearing.
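For reference, one plausible cause (an assumption, not confirmed in this thread) is the column's type: both AGE(end_date, start_date) and end_date - start_date return a PostgreSQL interval, and a reporting layer with no mapping for the interval type may silently drop the column. A sketch of a workaround that returns a plain number instead:

select end_date,
start_date,
processid,
EXTRACT(EPOCH FROM (end_date - start_date)) as duration_seconds  -- numeric, not interval
from processinstancelog

The duration arrives as seconds (double precision), which any dataset layer can display and which can be converted to minutes or hours in the report.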
When I aggregate values in Google Data Studio with a date dimension on a PostgreSQL Connector, I see buggy behaviour. The symptom is that performing COUNT(DISTINCT) returns the same value as COUNT().
My theory is that it has something to do with the aggregation on the data occurring after the count has already happened. If I attempt the exact same aggregation on the same data in an exported CSV instead of directly from a PostgreSQL Connector Data Source, the issue does not reproduce.
My PostgreSQL Connector is connecting to Amazon Redshift (jdbc:postgresql://*******.eu-west-1.redshift.amazonaws.com) with the following custom query:
SELECT
userid,
submissionid,
date
FROM mytable
Workaround
If I stop using the default date field for the Date Dimension and instead aggregate my own dates directly within the SQL query (date_byweek), the COUNT(DISTINCT) aggregation works as expected:
SELECT
userid,
submissionid,
to_char(date,'YYYY-IW') as date_byweek
FROM mytable
While this workaround solves my immediate problem, it sucks because I miss out on all the date functionality provided by Data Studio (Hierarchy Drill Down, Date Range filtering, etc.). Not to mention it reduces my confidence in what else may be "buggy" within the product 😞
How to Reproduce
If you'd like to re-create the issue, using the following data as a PostgreSQL Data Source should suffice:
> SELECT * FROM mytable
userid submissionid
-------- -------------
1 1
2 2
1 3
1 4
3 5
> COUNT(DISTINCT userid) -- ERROR: Returns 5 when data source is PostgreSQL
> COUNT(DISTINCT userid) -- EXPECTED: Returns 3 when data source is CSV (exported from same PostgreSQL query above)
I'm happy to report that as of Sep 17 2020, there's a workaround.
Data Studio added the DATETIME_TRUNC function (see here https://support.google.com/datastudio/answer/9729685?), which lets you add a custom field that truncates the original date to whatever granularity you want, without triggering the distinct bug.
Attempting to set the display granularity in the report still triggers the bug (i.e., you'll still see Oct 1 2020 12:00:00 instead of Oct 2020).
This can be solved by creating a SECOND custom field, which simply returns the first; you can then add it to the report, change its display granularity, and everything will work OK.
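A minimal sketch of the two calculated fields (the field names here are hypothetical; date is the original column):

Calculated field 1 (e.g. date_week):
DATETIME_TRUNC(date, WEEK)

Calculated field 2 (e.g. date_week_display), the one you add to the chart and whose display granularity you change:
date_week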
I had the same issue with the MySQL Connector, but my problem was solved when I changed the date field format in the DB from DATETIME (YYYY-MM-DD HH:MM:SS) to INT (Unix timestamp). After connecting this table to Google Data Studio, I set the type for this field to Date (YYYYMMDD) and everything works as expected. Hope this may help you :)
In this Google forum there is a curious solution by Damien Choizit that involves combining your data source with itself. It works well for me.
https://support.google.com/datastudio/thread/13600719?hl=en&msgid=39060607
It says:
I figured out a solution in my case: I used a Blend Data join, joining the same data source to itself on the corresponding join key(s); then I specified a date range dimension only on the left side and selected the columns I wanted to count-distinct aggregate as "dimensions" (and not metrics!) on the right side.
I have to perform an incremental load through Talend; for that I have to write a query in SQL Builder, but it is not returning more than 10 records.
For Oracle I have tried the below:
1. Checked "SQL - Limit number of rows" - not working
2. ROWNUM <= 100 - not working
Could anyone advise on this?
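If the 10-row cap only shows up inside SQL Builder, it may simply be the builder's preview limit rather than a problem with the query; the component should still return all rows when the job actually runs (an assumption worth verifying with a tLogRow). If you genuinely need a row cap in Oracle, remember that ROWNUM is assigned before ORDER BY is applied, so the usual pattern wraps the ordered query (table and column names below are placeholders):

SELECT *
FROM (
    SELECT t.*
    FROM my_table t          -- hypothetical source table
    ORDER BY last_update     -- hypothetical incremental-load column
)
WHERE ROWNUM <= 100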