I'm connecting to a REST API to bring several tables into a Power BI file. I can connect to the API and retrieve the data without any issues using 'Get Data > Other > Web' from the main toolbar, and then entering a URL in the format:
https://api01.naturalhr.net/2.0/timeoff/key/(security key here)/format/xml
The data usually comes back quite quickly - within about 10-20 seconds.
My issue is that when I try to refresh the same data, it usually times out after about five minutes. To refresh, I go to 'Transform Data' (I think this was 'Edit Queries' in earlier versions) > select the query I'm interested in (in this case 'timeoff') > click the 'Refresh Preview' button on the main menu.
The source in the formula bar in the Power Query editor is again just:
= Xml.Tables(Web.Contents("https://api01.naturalhr.net/2.0/timeoff/key/(security key here)/format/xml"))
So I'm refreshing exactly the same URL that originally returned the data without any issues, but the refresh at best takes much longer, and more commonly just times out altogether.
Note that I do apply some transformations to the original data, but even when I remove all of these I still see the time-out.
Can anyone explain why I can get, but not refresh, the same data? Many thanks.
EDIT:
To add some further information to this, I've used the new-ish Power BI diagnostic tools to try to troubleshoot this. What I've noticed is that while the Resource column displays the original URL, the Data Source Query column appends the text 'HTTP/1.1' to the original URL. Please see the screenshot below. If I try to establish a new connection with the added text, the query times out. Can anyone tell me why the extra text is added, why this prevents the data being returned, and how I can work around this? Thanks
[Screenshot: Power BI Diagnostics Output]
Try this way in a blank query:
let
    GetData =
        let
            // fetch the raw HTTP response
            source = Web.Contents("https://api01.naturalhr.net/2.0/timeoff/key/(security key here)/format/xml"),
            // parse the response as an XML document (rather than Xml.Tables)
            xml = Xml.Document(source)
        in
            xml
in
    GetData
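If the refresh is genuinely slow rather than failing outright, it may also help to raise the request timeout, which defaults to 100 seconds for Web.Contents. A minimal sketch (the ten-minute duration is an arbitrary choice, not something from the original question):

let
    // pass an options record to Web.Contents to override the default timeout
    source = Web.Contents(
        "https://api01.naturalhr.net/2.0/timeoff/key/(security key here)/format/xml",
        [Timeout = #duration(0, 0, 10, 0)]  // days, hours, minutes, seconds
    ),
    xml = Xml.Document(source)
in
    xml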
Use Fiddler, as @Rick Grimes said, to see whether your request is being sent normally.
I'm new to Grafana, so I might be missing something obvious, but I have a custom CloudWatch metric that records HTTP response codes into buckets (e.g. 2xx, 3xx, etc.).
My Grafana visualization uses a query to pull and group data from CloudWatch, and the resulting fields are dynamic: 2xx (us-east-1), 2xx (us-west-1), 3xx (us-east-1), etc.
I then use transformations to aggregate those values for a global view of the data.
The problem is, I can't create the transformation until the data exists. I'd like to have a 5xx field, but since that data is sporadic, it doesn't show up in the UI and I can't find a way to force "5xx (...)" to exist and have it get used when/if those response codes start occurring.
Is there a way to create placeholder fields somehow to achieve this?
You can't create it in the UI, but you still have the option of editing the panel model directly. It is JSON that represents the whole panel. Edit it manually: in the panel menu, click Inspect > Panel JSON, then create and customize another item in the transformations section. It is not a very convenient way to edit a panel, but it will achieve your goal.
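For orientation, the transformations section of the panel JSON is an array of objects of roughly this shape (a sketch only: the "organize" id and renameByName option come from Grafana's "Organize fields" transformation, and the field names here are placeholders for your own):

"transformations": [
    {
        "id": "organize",
        "options": {
            "renameByName": {
                "5xx (us-east-1)": "5xx"
            }
        }
    }
]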
I'm experiencing a very weird issue with "data has been changed" errors.
I use MS Access as a frontend and PostgreSQL as the backend. The backend used to be in MS Access and there were no issues; then it was moved to SQL Server and there were no issues there either. The problem started when I moved to PostgreSQL.
I have a table called Orders and a table called Job. Each order has multiple jobs. I have two forms: a parent form for the Order and a subform for the Jobs (a continuous form). I put the subform in a separate tab; the first tab contains general order information and the second tab has the Job information. Job is connected to Orders using a foreign key called OrderID: the Id of Orders equals OrderID in Job.
Here is my problem:
I enter some information in the first tab (customer name, dates, etc.), then move to the second tab, do nothing there, go back to the first one, and change a date. I get the "The data has been changed" error.
I'm confused as to why this is happening. Now, why do I call this weird?
First, if I put the subform on the first tab, I can change all fields of Orders just fine. It's only when I put it on the second tab, add some info, change tabs, then go back and change an already existing value that I get the error.
Second, if I make the subform on the second tab unbound (so no Id-OrderID connection), I get the SAME error.
Third, the "usual" id for "The data has been changed" error is Runtime Error 440. But what I get is Runtime Error: "-2147352567 (80020009)". Searching online for this error didn't help because it can mean a lot of different things, including "The value you entered isn't valid for this field" like here:
Access Run time error - '-2147352567 (80020009)': subform
or many different results for code 80020009 but none for "the data has been changed"
MS Access 2016, PostgreSQL 12.4.1
I'm guessing you are using ODBC to connect Access to PostgreSQL. If so, do you have timestamp fields in the data you are working with? I have seen the above because a Postgres timestamp can have higher precision than Access supports. This means that when you UPDATE, Access uses a truncated version of the timestamp in its WHERE clause, can't find the record, and you get the error. For this and other possible causes see:
https://odbc.postgresql.org/faq.html#6.4
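If timestamp precision does turn out to be the culprit, one workaround consistent with the FAQ above is to drop the sub-second precision on the PostgreSQL side, so the value Access sends back matches what PostgreSQL stored. A sketch (table and column names are placeholders; test on a copy first):

-- keep whole seconds only, so Access's truncated timestamp
-- matches the stored value during UPDATE lookups
ALTER TABLE job ALTER COLUMN updated_at TYPE timestamp(0);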
I work on an application for fetching and downloading SharePoint data. For every folder in SharePoint, I can get the list of all files inside a given folder by using the following SharePoint REST API endpoint:
/_api/web/GetFolderById('<folder_guid>')/Files
The expected size and GUID are provided for every file, so I can use them when I want to download the file. Then I use the following SharePoint REST API endpoint to actually get the file content:
/_api/web/GetFileById('<file_guid>')/$value
From time to time, when I download a file I get less data than expected: the size of the downloaded data simply differs from the value I obtained when listing the files' properties. However, when I request its content again, it may download successfully (the size of the downloaded data equals the expected value), or I may get incomplete data again.
I verified that the first endpoint (the one used to get the properties of all files in the folder) returns the correct file size. The problem is in the call to the second one.
I see that the response has a "Transfer-Encoding: chunked" header. When my HTTP client performs a chunked download, receiving the zero-length chunk means, by definition, that the end of the body has been reached. So it looks like in some cases SharePoint either returns incomplete data or sends the zero-length chunk when it should not.
What can be the reason for such strange behavior? Is it a known issue?
We actually also see this strange behaviour: many files are just small .aspx files, about 3-4 KB, and they consistently come back 15% or more smaller than the size shown in the file properties. We're also using the REST API, and this is really frustrating. All these strange bugs in SharePoint Online are very annoying.
This is an interesting topic... are those files large, like over 1 GB? It would seem that chunked file download is not a supported approach in SharePoint Online. A better option is to use RPC. Please see these links for examples:
https://sharepoint.stackexchange.com/questions/184789/download-large-files-from-sharepoint-online
https://social.msdn.microsoft.com/Forums/office/en-US/03e55d41-1daf-46a5-b61d-2d80139123f4/download-large-files-using-rest?forum=sharepointdevelopment
https://piyushksingh.com/2016/08/15/download-large-files-from-sharepoint-online/
You could also check whether the MS Graph API works better for this case:
https://learn.microsoft.com/en-us/graph/api/driveitem-get-content?view=graph-rest-1.0&tabs=http
... I hope this is of some help.
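Until the root cause is clear, one defensive client-side measure is to compare the number of bytes actually received against the expected size from the file properties and retry on a mismatch. A minimal sketch in Python (the site URL, GUID, auth headers, and retry count are all placeholders/assumptions):

import requests

def download_file(site_url, file_guid, expected_size, headers, attempts=3):
    """Download a file via the REST endpoint, retrying if the body is short."""
    url = f"{site_url}/_api/web/GetFileById('{file_guid}')/$value"
    body = b""
    for _ in range(attempts):
        body = requests.get(url, headers=headers).content
        if len(body) == expected_size:
            return body
        # incomplete body: the chunked stream ended early, so try again
    raise IOError(f"got {len(body)} bytes, expected {expected_size}")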
Whenever I try to apply a filter to an attribute that has ValueSelection = Dropdown, the dropdown is not populated and the error message "The requested list could not be retrieved because the query is not valid or a connection could not be made to the data source" is shown instead.
If I set ValueSelection = List, I get a different error message:
An attempt has been made to use a semantic query extension associated with the data extension 'SQL' that is not registered for this report server.
(Microsoft.ReportingServices.SemanticQueryEngine)
This happens within the BIDS environment and was observed in both SQL 2005 and SQL 2008.
I've already studied articles that discuss similar problems, but none of them applied to my case. The user account in the data source has all the necessary rights, and data can be retrieved without any problem (for example, if I try "Explore data" in the data source view). SQL Profiler shows that no query is sent to SQL Server when there is an attempt to populate the dropdown. So nothing is wrong with the query; it is simply never executed.
Your connection is not working. Test your connection by trying a simple table and query output.
This will enable you to test the connection before trying anything advanced.
I got this problem, and in my case it was caused by a wrong connection string in the Data Source: instead of just a SQL Server name like "SOMESQLSERVER_MACHINE", I had for some reason "SOMESQLSERVER_MACHINE.our.corp.domain". It should have been the same machine, but the domain suffix was wrong; after removing it, everything worked like a charm again. That said, it's always a good idea to start with detailed checks on your basic settings.
Otherwise this could be a problem with permissions to the folders on Report Manager.
I'm implementing a RESTful API which exposes Orders as a resource and supports pagination through the resultset:
GET /orders?start=1&end=30
where the orders to paginate are sorted by ordered_at timestamp, descending. This is basically approach #1 from the SO question Pagination in a REST web application.
If the user requests the second page of orders (GET /orders?start=31&end=60), the server simply re-queries the orders table, sorts by ordered_at DESC again and returns the records in positions 31 to 60.
The problem I have is: what happens if the resultset changes (e.g. a new order is added) while the user is viewing the records? In the case of a new order being added, the user would see the old order #30 in first position on the second page of results (because the same order is now #31). Worse, in the case of a deletion, the user sees the old order #32 in first position on the second page (#31) and wouldn't see the old order #31 (now #30) at all.
I can't see a solution to this without somehow making the RESTful server stateful (urg) or building some pagination intelligence into each client... What are some established techniques for dealing with this?
For completeness: my back-end is implemented in Scala/Spray/Squeryl/Postgres; I'm building two front-end clients, one in backbone.js and the other in Python Django.
The way I'd do it is to assign indices from old to new, so they never change. When a query arrives without any start parameter, return the newest page. The response should also contain an index indicating which elements are contained, so you can calculate the indices to request for the next older page. While this is not exactly what you want, it seems like the easiest and cleanest solution to me.
Initial request: GET /orders?count=30 returns:
{
    "start": 1039,
    "count": 30,
    ... // data
}
From this the consumer calculates that the next older page starts at 1039 - 30 = 1009.
Next request: GET /orders?start=1009&count=30, which then returns:
{
    "start": 1009,
    "count": 30,
    ... // data
}
Instead of raw indices you could also return a link to the next page:
{
    "next": "/orders?start=1009&count=30"
}
This approach breaks if items get inserted or deleted in the middle. In that case you should use some auto-incrementing persistent value instead of a positional index, as sketched below.
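A sketch of what the stable-index query could look like with such a persistent auto-incrementing value (the id column is an assumption, not from the original answer):

-- start and count come from the request, e.g. /orders?start=1009&count=30;
-- deleted rows simply leave gaps, so a page may return fewer rows
SELECT * FROM Orders
WHERE id >= 1009 AND id < 1009 + 30
ORDER BY id;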
The sad truth is that all the sites I see have pagination "broken" in that sense, so there must not be an easy way to achieve that.
A quick workaround could be to reverse the ordering, so the position of the items is absolute and unchanged by new additions. From your front page you can hand out the latest indices to ensure consistent navigation from there.
Pros: the same URL gives the same results.
Cons: there's no evident way to get the latest elements... Maybe you could use negative indices and redirect the result page to the absolute indices.
With a RESTful API, application state should live in the client. Here, the application state would be some sort of timestamp or version number recording when you started looking at the data. On the server side, you will need some form of audit trail, which is properly server data, as it does not depend on whether there have been clients or what they have done. At the very least, it should know when the data last changed. There is no contradiction with REST here.
You could add a version parameter to your GET. When the client first requests a page, it does not send a version; the server's reply contains one. For instance, if the reply contains links to next/other pages, those links contain &version=... The client should send the version when requesting another page.
When the server receives a request with a version, it should at least know whether the data have changed since the client started looking and, depending on what sort of audit trail you have, how they have changed. If they have not, it answers normally, transmitting the same version number. If they have, it can at least tell the client, and depending on how much it knows about how the data have changed, it can tailor the reply accordingly.
Just as an example, suppose you get a request with start, end, and version, and you know that since that version was current, 3 rows sorting before start have been deleted. You might send a redirect with start-3, end-3, and the new version.
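A minimal sketch of that redirect logic in Python (the audit-trail shape and all names here are assumptions for illustration, not part of the original answer):

from dataclasses import dataclass

@dataclass
class AuditEntry:
    version: int    # server version at which the change happened
    kind: str       # "insert" or "delete"
    position: int   # position of the affected row in the sorted order

def adjust_page(start, end, client_version, current_version, audit_log):
    """Shift the requested window by the number of unseen deletions before it."""
    deleted_before = sum(
        1 for e in audit_log
        if e.version > client_version and e.kind == "delete" and e.position < start
    )
    if deleted_before == 0:
        return start, end, client_version   # page unaffected, keep old version
    return start - deleted_before, end - deleted_before, current_version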
WebSockets can do this. You can use something like pusher.com to catch realtime changes to your database and pass the changes to the client. You can then bind different pusher events to work with models and collections.
Just going to throw this out there; please feel free to tell me if it's completely wrong and why.
This approach uses a left_off value to page through the results without using offsets.
Say you need your results ordered by the order_at timestamp, descending.
So when I ask for the first result set, it's:
SELECT * FROM Orders ORDER BY order_at DESC LIMIT 25;
right?
This is the case when you ask for the first page (in URL terms, probably the request that doesn't have a left_off parameter). The general form is:
yoursomething.com/orders?limit=25&left_off=$timestamp
Then, when you receive your data set, just grab the timestamp of the last viewed item, e.g. 2015-12-21 13:00:49.
Now, to request the next 25 items, go to: yoursomething.com/orders?limit=25&left_off=2015-12-21 13:00:49 (the last viewed timestamp).
In SQL you would just make the same query with a condition that the timestamp is strictly less than $left_off (strictly less, so the last seen item is not repeated):
SELECT * FROM Orders
WHERE order_at < '2015-12-21 13:00:49'
ORDER BY order_at DESC
LIMIT 25;
You should get the next 25 items after the last seen item.
For those who see this answer: please comment on whether this approach is relevant or even possible in the first place. Thank you.
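One caveat with this approach: if several orders share the same order_at, a plain < comparison can skip rows at page boundaries. The usual fix is a compound sort key with a unique tie-breaker, which Postgres supports directly via row-value comparison (a sketch; the id column and the literal values are placeholders, not from the original answer):

-- '2015-12-21 13:00:49' and 1234 are the (order_at, id) of the last seen row
SELECT * FROM Orders
WHERE (order_at, id) < ('2015-12-21 13:00:49', 1234)
ORDER BY order_at DESC, id DESC
LIMIT 25;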