I am building a Business Intelligence chatbot where data is retrieved from APIs. I make the API calls from cloud functions, which filter and return the data.
I can make the calls while I am connected to the VPN, but not from outside it (IBM Cloud Functions).
import requests
import pandas as pd

def reports(api_url):
    # fetch the report from the private API
    r = requests.get(api_url)
    df = pd.read_json(r.text)
    # filtered report
    filtered_df = df[df['state'] == 'CA']
    return filtered_df.to_json()
The above api_url is private. I get an HTTP connection error when it is called from IBM Cloud Functions.
I am confused about the pricing criteria in AWS Amplify.
Let's assume:
List<Object> obj = await Amplify.DataStore.query(Object.classType, where: Object.ID.eq('123'));
// obj.length is 1000
Is this command 1000 requests or 1 request?
There are no additional charges to use Amplify DataStore in your application; you only pay for the backend resources you use, such as AppSync and DynamoDB.
Refer to DynamoDB pricing for your AWS region.
I am trying to get usage data from Power BI into Power BI Desktop in order to create an admin report. The report will show the usage of the different reports in Power BI.
In order to get this data I am using the Power BI REST APIs. Specifically calls such as:
GET https://api.powerbi.com/v1.0/myorg/admin/datasets -- To get datasets
GET https://api.powerbi.com/v1.0/myorg/admin/apps?$top={$top} -- To get apps
To get datasets in Power Query I can then write:
let
    Source = Json.Document(
        Web.Contents(
            "https://api.powerbi.com/v1.0/myorg/admin/datasets",
            [Headers = [Authorization = "Bearer MYKEY"]]))
in
    Source
This does retrieve the datasets. However, the key used is taken from https://learn.microsoft.com/en-us/rest/api/power-bi/admin/apps-get-apps-as-admin#code-try-0
That link lets a user try the API, which gives you a temporary key. To get a refreshable token/key, another call must be made to the API. To make this call, I have created an application in Azure which has been granted rights by our admin. To retrieve the refreshable token, I have written this in Power Query (using an Azure HTTP POST request for an access token from Power BI):
() =>
let
    apiUrl = "https://login.windows.net/MY TENANT ID/oauth2/token",
    body = [
        client_id = "My Client ID",
        grant_type = "client_credentials",
        client_secret = "My Client Secret",
        resource = "https://analysis.windows.net/powerbi/api"
    ],
    Source = Json.Document(
        Web.Contents(apiUrl, [
            Headers = [Accept = "application/json"],
            Content = Text.ToBinary(Uri.BuildQueryString(body))
        ]))
in
    Source
This call is successful and returns a response containing the access token (screenshot omitted).
The natural progression would then be to paste the generated access token into my first query, but this gives me an access error: "Expression.Error: Access to the resource is forbidden." When changing the data source settings from Anonymous to Windows, I get another error message: "Expression.Error: The 'Authorization' header is only supported when connecting anonymously..."
Any ideas on what to do in order to get the data into Power BI would be greatly appreciated. Thanks.
It is probably not the answer you are looking for, but my team was attempting to do the very same thing and found that the only way to get this to work was to add some form of data store between the PBI admin logs and the PBI dataset. We use CSVs and a SQL database: an Azure job runs on a schedule, collects the data, and stores it locally; then PBI reads that stored data. A rough sketch of such a job follows below.
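For illustration only, here is a minimal Python sketch of such a collection job, using the admin datasets endpoint from the question; the token value and the output file name are placeholders, not part of the original setup:

import requests
import pandas as pd

# endpoint from the question; the token would come from the OAuth call described above
API_URL = "https://api.powerbi.com/v1.0/myorg/admin/datasets"
ACCESS_TOKEN = "MYKEY"  # placeholder token

resp = requests.get(API_URL, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()

# the admin APIs return results in a "value" array; persist it as CSV
# so that Power BI can read the file on its next refresh
pd.DataFrame(resp.json()["value"]).to_csv("datasets.csv", index=False)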
This is not a direct answer to your question, but you can build admin reports from the Power BI Service data using the Power BI REST API Connector from GitHub (link). Then you can connect to the service directly from PBI Desktop, without dealing with OAuth and/or AAD authentication yourself. The connector has some limitations, but it was very useful for our reporting.
I don't know if you can install this custom connector in the Power BI Service, but it works perfectly in Power BI Desktop.
I have a front end with forms for data entry. I enter data there and it is sent to the backend, but I need to intercept it and write it to the database. I have a script that writes data to a database, but I don't understand how to intercept the incoming data. I am using the Flask framework. Please help!
from flask import Flask
app = Flask(__name__)

@app.route('/')
def main_page():
    return "<html><head></head><body>A RESTful API in Flask.</body></html>"

@app.route('/api/v1/reports/', methods=['GET'])
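For the interception itself, a minimal sketch of a POST endpoint that reads the submitted form fields; the field handling and the save_to_db helper are assumptions, not part of the original code:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/v1/reports/', methods=['POST'])
def create_report():
    data = request.form.to_dict()  # the fields submitted by the front-end form
    # save_to_db(data)             # hypothetical hook to your existing database script
    return jsonify(data), 201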
I'm new to Azure Databricks and Scala. I'm trying to consume an HTTP REST API that returns JSON. I went through the Databricks docs, but I don't see any data source that would work with a REST API. Is there any library or tutorial on how to work with REST APIs in Databricks? If I make multiple API calls (because of pagination), it would be nice to get it done in a parallel way (the Spark way).
I would be glad if you could point me to a Databricks or Spark way to consume a REST API, as I was shocked that there's no information in the docs about an API data source.
Here is a simple implementation.
The basic idea is that spark.read.json can read an RDD of JSON strings.
So, just create an RDD from the GET call and then read it as a regular DataFrame.
%spark
// fetch the raw JSON response for the given URL
def get(url: String): String = scala.io.Source.fromURL(url).mkString

val myUrl = "https://<abc>/api/v1/<xyz>"
val result = get(myUrl)

// strip the trailing newline and wrap the response in a one-element RDD
val jsonRdd = sc.parallelize(result.stripLineEnd :: Nil)
val jsonDf = spark.read.json(jsonRdd)
That's it.
It sounds to me like what you want is to import into Scala a library for making HTTP requests. I suggest HTTP instead of a higher-level REST interface, because the pagination may be handled inside the REST library, which may or may not support parallelism.
Managing the lower-level HTTP yourself lets you decouple pagination. Then you can use the parallelism mechanism of your choice.
There are a number of libraries out there, but recommending a specific one is out of scope.
If you do not want to import a library, you could have your Scala notebook call another notebook running a language which has HTTP included in the standard library. That notebook would then return the data to your Scala notebook; a sketch of this idea follows below.
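As a rough sketch of that idea (assuming the notebook-provided spark and sc, a made-up endpoint, and a known page count), the pages can be fetched in parallel using only the Python standard library for HTTP:

import urllib.request

base_url = "https://example.com/api/v1/items"  # placeholder endpoint
pages = range(1, 11)                           # assume ten known pages

def fetch(page):
    # one HTTP GET per page, executed on the executors
    with urllib.request.urlopen(f"{base_url}?page={page}") as resp:
        return resp.read().decode("utf-8")

# distribute the page numbers, fetch in parallel, then parse the JSON strings
json_rdd = sc.parallelize(pages).map(fetch)
df = spark.read.json(json_rdd)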
I have an Android application which collects data in the form of text and images. I implemented an AWS Amplify integration: I am using Auth for logins, and I also added DataStore for online/offline synchronization of collected data to the cloud. But I get error 400 because my item exceeds the 400 KB item size limit on DynamoDB. After research here, I discovered that it's possible to use Amplify DataStore to store complex objects like images, but they are stored in S3. The sample code that demonstrates this is for React, and I have failed to implement the same in native Android. Does anyone have a way of implementing this in Android?
Currently, Amplify only supports 'complex objects' when using the API package. This does not include the DataStore package, which handles AppSync differently.
complex object support: import { API } from '@aws-amplify/api'
no complex object support: import { DataStore } from '@aws-amplify/datastore'
Sources:
https://github.com/aws-amplify/amplify-js/issues/4579#issuecomment-566304446
https://docs.amplify.aws/lib/graphqlapi/advanced-workflows/q/platform/js#complex-objects
If you want to use DataStore, you currently need to put the file into S3 separately; you can then store reference details for the S3 file (i.e. bucket, region, key) in the DynamoDB record. This can be done with the Amplify Storage module.
// upload the file to S3 via Amplify Storage
const { key } = await Storage.put(filename, file, { contentType: file.type })
// save a DataStore record that references the S3 object (bucket/region/key)
const result = await DataStore.save({ /* an object with s3 key/info */ })