I am currently using Databricks and trying to set up a PySpark connection without using an Azure AD application.
I was looking at the provided PySpark examples here, but they all require an AAD_CLIENT_ID. Is there a way to avoid using an application at all?
Thanks in advance!
It is also possible to give the connector an AAD token, or to use device authentication, as explained here: https://github.com/Azure/azure-kusto-spark/blob/master/docs/Authentication.md
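For reference, here is a minimal PySpark sketch of a read that relies on device authentication (no AAD application). The option names follow the connector's documentation, and the cluster, database, and query values are placeholders, so verify them against your connector version.

```python
# Sketch: read from Azure Data Explorer (Kusto) in PySpark on Databricks using
# device authentication, i.e. without registering an AAD application.
# When no AAD app id/secret is provided, the connector falls back to device
# authentication and prints a device-login code in the driver output.
# Cluster, database, and query below are placeholders; `spark` is the active
# SparkSession that Databricks creates for you.
df = (
    spark.read
    .format("com.microsoft.kusto.spark.datasource")
    .option("kustoCluster", "https://<your-cluster>.kusto.windows.net")
    .option("kustoDatabase", "<your-database>")
    .option("kustoQuery", "<YourTable> | take 10")
    .load()
)
df.show()

# Alternatively, if you already hold an AAD access token, pass it instead:
#   .option("accessToken", "<aad-access-token>")
```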
Is there a way to connect Azure Data Factory to Salesforce Commerce Cloud?
In Data Factory I only see connectors for Salesforce Service & Marketing Cloud.
If it's possible, I'd appreciate it if someone could show me an example.
Thank you!
Actually, from the Azure Data Factory connector overview (https://docs.microsoft.com/en-us/azure/data-factory/connector-overview), we can see that Salesforce Commerce Cloud is not supported.
The only way is to achieve it at the code level, and then call a Function, Python, or Notebook activity to run it in Data Factory.
There isn't an existing code example we can provide; you'll need to design it yourself.
Hi, I have a requirement to connect MarkLogic with PySpark. Is there any reference you can point me to so I can get started? I found a few blogs that suggest using the "MarkLogic Connector for Hadoop", but since it is deprecated starting with MarkLogic release 10.0-3, I am looking for an alternative.
You can use the MarkLogic extended REST API (using JavaScript or XQuery) - see Extending the REST API.
You also have the option to evaluate an ad-hoc JavaScript/XQuery query - see Evaluating an Ad-Hoc Query.
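For example, here is a rough sketch of pushing rows from a PySpark DataFrame into MarkLogic through its /v1/documents REST endpoint. The host, port, credentials, document URI scheme, and the `df` DataFrame with an `id` column are all assumptions, and it presumes the default digest-authenticated REST instance on port 8000.

```python
import requests
from requests.auth import HTTPDigestAuth

# Sketch: push each partition of a PySpark DataFrame into MarkLogic through
# its REST API (/v1/documents). Host, port, credentials, and URI scheme are
# placeholders -- adjust them to your MarkLogic REST instance.
ML_URL = "http://<marklogic-host>:8000/v1/documents"
AUTH = HTTPDigestAuth("<user>", "<password>")

def write_partition(rows):
    for row in rows:
        doc = row.asDict()
        # Assumes the DataFrame has an 'id' column to build the document URI.
        uri = "/ingest/{}.json".format(doc["id"])
        resp = requests.put(ML_URL, params={"uri": uri}, json=doc, auth=AUTH)
        resp.raise_for_status()

# `df` is the DataFrame you want to ingest.
df.foreachPartition(write_partition)
```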
Hope that helps.
I was able to achieve this using the MarkLogic REST API. Thanks for the help, @Ashish.
I even wrote a story on Medium about how we can achieve this:
https://medium.com/@anshumankaku/ingesting-data-from-cdp-hive-into-marklogic-43e4768be271
I have a web application that has to be linked with a graph database (Neo4J). Is it possible to read or write data to Neo4J using Appery?
I have chosen Appery because I am a beginner when it comes to databases, and Appery seems to make it easy to use REST APIs; there is also a free trial.
Feedback would be highly helpful. Thanks in advance.
Edit: I am aware that Neo4J uses Cypher queries. I would like to know if Appery supports Cypher as well.
Side note: The reason I am asking the question here without trying it out is that I don't have an active DB, and my application is private due to my company's security policy.
You can do that as long as the Neo4J database has a REST API. If it does, then you can make calls to it from an Appery app (from Server Code or API Express). Hope this helps.
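To show the shape of such a call: Neo4J ships an HTTP endpoint that accepts Cypher statements in a JSON body, so Appery Server Code (or any REST client) just needs to POST to it. The sketch below uses Python only to illustrate the request; the path shown is the Neo4j 4.x transaction endpoint (3.x used /db/data/transaction/commit), and the host, database name, and credentials are placeholders.

```python
import requests

# Sketch: run a Cypher statement against Neo4j's HTTP transaction endpoint.
# For Neo4j 4.x the path is /db/<database>/tx/commit; host and credentials
# below are placeholders.
url = "http://<neo4j-host>:7474/db/neo4j/tx/commit"
payload = {
    "statements": [
        {"statement": "MATCH (n:Person) RETURN n.name LIMIT 5"}
    ]
}
resp = requests.post(url, json=payload, auth=("neo4j", "<password>"))
resp.raise_for_status()
print(resp.json()["results"])
```

The Cypher itself goes in the request body, so whether "Appery supports Cypher" reduces to whether it can send an authenticated JSON POST, which Server Code and API Express can do.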
Ideally, I would like to write a function to start a Dataprep job on one of the following events:
a Kafka message
a file added or changed in GCS
I'm thinking I could write the triggers in Python if there is a supporting library, but I can't find one. I'm happy to use a different language if Python isn't available.
Thanks
Yes, there is an API available now that you can use:
https://cloud.google.com/dataprep/docs/html/API-Workflow---Run-Job_145281449
This explains the Dataprep API and how to run and schedule jobs.
If you are able to do it using Python and this API, please post an example here as well.
The API documentation for the related Trifacta product is available at https://api.trifacta.com.
Note that to use the Google Dataprep API, you will need to obtain an access token (see https://cloud.google.com/dataprep/docs/html/Manage-API-Access-Tokens_145281444).
You must be a project owner to create access tokens and to enable the Dataprep API for that project. Once that's done, you can create access tokens from the Access Tokens page under the user preferences.
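As a starting point, here is a rough sketch of a GCS-triggered Cloud Function (Python) that starts a Dataprep job through the runJobGroup endpoint. The endpoint path, the wrangled-dataset (recipe) id, and the environment variable names are assumptions to verify against the API docs linked above; the same request could equally be issued from a Kafka consumer.

```python
import os
import requests

# Sketch: a background Cloud Function triggered by a new/changed object in a
# GCS bucket (google.storage.object.finalize) that kicks off a Dataprep job.
# DATAPREP_TOKEN and RECIPE_ID are assumed to be set as environment variables;
# verify the endpoint against the Dataprep API docs for your edition.
DATAPREP_URL = "https://api.clouddataprep.com/v4/jobGroups"

def run_dataprep_job(event, context):
    """Starts a Dataprep job group when a file lands in the bucket."""
    print("Triggered by file: gs://{}/{}".format(event["bucket"], event["name"]))
    resp = requests.post(
        DATAPREP_URL,
        headers={"Authorization": "Bearer " + os.environ["DATAPREP_TOKEN"]},
        json={"wrangledDataset": {"id": int(os.environ["RECIPE_ID"])}},
    )
    resp.raise_for_status()
    print("Started job group:", resp.json().get("id"))
```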
I'm a little confused here. The Google Cloud SQL FAQ page states that UDFs are not supported. However, I'm able to import existing functions or create a new function in the database. So, just to confirm: does Google Cloud SQL support UDF creation now? We need to verify this because we plan to move an existing database that uses a lot of UDFs to Google Cloud SQL. I have set up a MySQL 5.6 (preview) database in Cloud SQL.
Thank you.
Actually, as documented, UDFs are not supported, and there is no estimated date for their official support.
You may want to subscribe to the google-cloud-sql-announce group to be informed about new feature releases.
Regards,
Paolo