I had to migrate my personal account to a different email provider; the new account is an Organization Admin of the entire GC project.
I cannot see any Dataprep flows from within the new account, but I can still access all flows from my old account. Is there a quick way to grant access to other accounts and/or migrate all the jobs at once?
As James also mentioned, you need to export the flows from your old account and then import them into the new one. I recommend taking a look at the following guides for step-by-step instructions on performing these actions:
Import Flow
Export Flow
Additionally, you can try to share or send a copy of the flow, which enables you to work with a completely independent version of the original.
Cloud Dataprep does not have a built-in way to migrate flows.
The next best approach may be to export the flows as Dataflow templates and then use them in the new account.
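To illustrate the template route, here is a minimal sketch (Python, using google-api-python-client) of launching an exported Dataflow template from the new account's project. The project ID, region, GCS paths, and parameter names are placeholders that depend on what your export produced.

```python
# Minimal sketch: launch a Dataflow template that was exported from a Dataprep flow.
# Assumes the template was exported to GCS and the new account has Dataflow permissions.
# Project ID, region, GCS paths, and parameters below are placeholders.
from googleapiclient.discovery import build

def launch_exported_template(project_id, region, template_gcs_path, job_name, parameters):
    dataflow = build("dataflow", "v1b3")  # uses Application Default Credentials
    request = dataflow.projects().locations().templates().launch(
        projectId=project_id,
        location=region,
        gcsPath=template_gcs_path,  # e.g. "gs://my-bucket/templates/my_flow_template"
        body={"jobName": job_name, "parameters": parameters},
    )
    return request.execute()

if __name__ == "__main__":
    response = launch_exported_template(
        project_id="my-new-project",                                     # placeholder
        region="us-central1",
        template_gcs_path="gs://my-bucket/templates/my_flow_template",   # placeholder
        job_name="migrated-dataprep-flow",
        parameters={"inputLocations": "gs://my-bucket/input.csv"},       # depends on the export
    )
    print(response)
```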
We have multiple environments, and identity providers and clients are currently inserted by manual human input when migrating up through the environments.
Is there a way to isolate the export/import of an identity provider or client? The manual input has introduced errors when migrating identity providers and clients up through the environments.
Thank you.
Is there a way to isolate export/import of an identity provider or client?
I have faced the same issue. To solve it, I created a bunch of bash scripts based on the Admin REST API. For instance:
Get the clients: GET /{realm}/clients
Create the clients: POST /{realm}/clients
First, I call the get endpoint, and export its response (i.e., the clients) into a .json that I later use as the payload for the post endpoint.
And the same logic applies to the identity providers. It is a bit cumbersome in the beginning to create those scripts (some of them I have already uploaded to my repo, and I plan to upload a bunch more), but once they are working the process gets smoother.
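For what it's worth, the same GET-then-POST flow can be sketched in Python instead of bash; a minimal example with the requests library is below. The base URL, realm names, and admin credentials are placeholders (older Keycloak versions also prefix these paths with /auth), and only the clients endpoint is shown since identity providers follow the same pattern via /identity-provider/instances.

```python
# Minimal sketch: copy clients from a source realm to a target realm
# via the Keycloak Admin REST API. Base URL, realms, and credentials are placeholders.
import requests

BASE_URL = "https://keycloak.example.com"    # placeholder
SOURCE_REALM = "dev"                         # placeholder
TARGET_REALM = "staging"                     # placeholder

def admin_token(username, password):
    # Obtain an admin access token from the master realm using the admin-cli client.
    resp = requests.post(
        f"{BASE_URL}/realms/master/protocol/openid-connect/token",
        data={
            "grant_type": "password",
            "client_id": "admin-cli",
            "username": username,
            "password": password,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def copy_clients(token):
    headers = {"Authorization": f"Bearer {token}"}
    # GET /{realm}/clients -> export the client representations
    clients = requests.get(
        f"{BASE_URL}/admin/realms/{SOURCE_REALM}/clients", headers=headers
    ).json()
    for client in clients:
        client.pop("id", None)  # let the target realm assign its own internal id
        # POST /{realm}/clients -> import each client into the target realm
        resp = requests.post(
            f"{BASE_URL}/admin/realms/{TARGET_REALM}/clients",
            headers=headers,
            json=client,
        )
        print(client.get("clientId"), resp.status_code)

if __name__ == "__main__":
    copy_clients(admin_token("admin", "admin-password"))  # placeholder credentials
```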
You can apply the same aforementioned logic, but instead of using bash scripts, use the Keycloak Java API. The other option is to use Keycloak's realm export feature: export the realm, remove from the .json file all the content that you do not need, and then use the remaining content with the realm import feature.
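For the realm export/import route, trimming the exported .json down to just the parts you need can also be scripted; a small sketch follows, where the file names and the set of keys to keep are assumptions you would adjust.

```python
# Minimal sketch: keep only identity providers and clients from a full realm export.
# File names and the set of keys to keep are placeholders.
import json

KEEP_KEYS = ["realm", "identityProviders", "clients"]

with open("realm-export.json") as f:
    realm = json.load(f)

trimmed = {key: realm[key] for key in KEEP_KEYS if key in realm}

with open("realm-import.json", "w") as f:
    json.dump(trimmed, f, indent=2)
```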
I've been trying to implement a way to download all the changes made by a particular user in Salesforce using a PowerShell script and create a package. The changes could be anything, added or modified: Apex classes, profiles, Accounts, etc., selected by who modified them, component ID, timestamp, and so on. Below is the URL for the API that exposes this metadata, but it does not explain any way to do this with a script.
https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_listmetadata.htm
Does anyone know how I can implement this?
Regards,
Kramer
Salesforce orgs other than scratch orgs do not currently provide source tracking, the feature that makes it possible to pinpoint user changes in metadata and extract only those changes. Where it is available, this is done by an SFDX/Metadata API client, like Salesforce DX or CumulusCI (disclaimer: I'm on the CumulusCI team).
I would not try to implement a Metadata API client in PowerShell; instead, harness one of the existing tools to do so.
Salesforce orgs other than scratch orgs don't provide source tracking at present. To identify user changes, you can either
Attempt to extract all metadata and diff it against your version control, which is considerably harder than it sounds and is implemented by a variety of commercial DevOps tools for Salesforce (GearSet, Copado, etc).
Have the user manually add components to a Change Set or Unmanaged Package, and use a Metadata API client as above to retrieve the contents of that package, as sketched below. (Little-known fact: a Change Set can be retrieved as a package!)
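As a rough illustration of that second option, the retrieval can be driven by a script that shells out to an existing Metadata API client instead of reimplementing one (shown in Python here for brevity); it assumes the legacy sfdx CLI is installed and authenticated, and the change set name and org alias are placeholders.

```python
# Rough sketch: retrieve the contents of a Change Set / unmanaged package by name
# by shelling out to the sfdx CLI (must already be installed and authenticated).
# "My Change Set" and "my-org-alias" are placeholders.
import subprocess

def retrieve_change_set(package_name, org_alias, target_dir="retrieved"):
    subprocess.run(
        [
            "sfdx", "force:mdapi:retrieve",
            "--packagenames", package_name,
            "--retrievetargetdir", target_dir,
            "--targetusername", org_alias,
        ],
        check=True,
    )

if __name__ == "__main__":
    retrieve_change_set("My Change Set", "my-org-alias")
```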
To emphasize: DevOps on Salesforce does not work like other platforms. Working on the Metadata API requires a fair amount of time investment and specialization. Harness the existing work of the Salesforce community where you can, but be aware that the task you are laying out may be rather more involved than you think and it's not necessarily something you can just throw together from off-the-shelf components.
Ideally, I would like to write a function to start a Dataprep job on one of the following events:
Kafka message
file added or changed in GCS.
I'm thinking I could write the triggers in Python if there is a supporting library, but I can't find one. I'm happy to use a different language if Python isn't available.
Thanks
Yes, there is an API available now that you can use.
https://cloud.google.com/dataprep/docs/html/API-Workflow---Run-Job_145281449
This explains the Dataprep API and how to run and schedule jobs.
If you are able to do it using Python and this API, please post an example here as well.
The API documentation for the related Trifacta product is available at https://api.trifacta.com.
Note that to use the Google Dataprep API, you will need to obtain an access token (see https://cloud.google.com/dataprep/docs/html/Manage-API-Access-Tokens_145281444).
You must be a project owner to enable the Dataprep API and access token creation for that project. Once that's done, you can create access tokens using the Access tokens page, under the user preferences.
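Putting the pieces together, here is a minimal Python sketch that starts a Dataprep job through this API with an access token; with an (event, context) signature it could also be deployed as a GCS-triggered Cloud Function for the "file added or changed" case. The wrangled dataset (recipe) id and the token handling are placeholders; the endpoint and request body follow the Run Job workflow page linked above.

```python
# Minimal sketch: start a Cloud Dataprep job via the Dataprep API.
# The access token and wrangled dataset (recipe) id are placeholders; the request
# shape follows the "API Workflow - Run Job" page linked above.
import os
import requests

DATAPREP_ENDPOINT = "https://api.clouddataprep.com/v4/jobGroups"
ACCESS_TOKEN = os.environ["DATAPREP_TOKEN"]   # created on the Access tokens page
WRANGLED_DATASET_ID = 123456                  # placeholder: the recipe to run

def run_dataprep_job(event=None, context=None):
    """Can be deployed as a GCS-triggered Cloud Function (event/context unused here)."""
    resp = requests.post(
        DATAPREP_ENDPOINT,
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        json={"wrangledDataset": {"id": WRANGLED_DATASET_ID}},
    )
    resp.raise_for_status()
    print("Started job group:", resp.json().get("id"))

if __name__ == "__main__":
    run_dataprep_job()
```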
According to the Microsoft documentation, you need to have Basic access in VSTS in order to create Test Plans; however, when I log in with a user having Basic access, the link for adding a Test Plan is not there.
What additional access does this user need to be able to create Test Plans? The user is also an administrator of the team to which Test Plans need to be added.
Screenshots show the MS documentation, the particular user's access level, how it should look according to MS, and how it actually looks (the "+" icon to add Test Plans does not appear when logging in as the user in question with Basic access).
That article introduces the testing permissions and access levels, not the way (UI) to manage tests (e.g. create a test plan).
With the Basic access level and Contributors permission, you can create a test plan. There are many ways to create a test plan, such as Microsoft Test Manager (client software) or the REST API. But you can't do it in the Test tab without the Test Manager extension, which is used to manage tests online (Test tab).
To conclude, if you want to create a test plan online (Test tab), you need to install the Test Manager extension. Alternatively, you can build a custom extension that calls the REST API to manage tests online without installing the Test Manager extension.
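As a rough sketch of the REST API route, the Python snippet below creates a test plan using a personal access token; the organization, project, plan name, and api-version are placeholders to adjust for your account, and the endpoint follows the Test Plans - Create REST reference.

```python
# Rough sketch: create a test plan via the Azure DevOps / VSTS REST API
# using a personal access token (PAT). Organization, project, plan name,
# and api-version below are placeholders.
import requests

ORGANIZATION = "my-organization"   # placeholder
PROJECT = "MyProject"              # placeholder
PAT = "my-personal-access-token"   # placeholder

def create_test_plan(name):
    url = (
        f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}"
        f"/_apis/test/plans?api-version=5.0"
    )
    resp = requests.post(
        url,
        auth=("", PAT),   # the PAT goes in the password field
        json={"name": name},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    plan = create_test_plan("Sprint 1 test plan")
    print(plan["id"], plan["name"])
```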
The Test Manager module in VSTS requires an additional license.
It costs $52 per month per user.
A Visual Studio Enterprise subscription also includes this license.
So, as you are using the Basic license of VSTS, the Test Manager module is not fully available, even with the Administrator role.
I think this should be explained in the documentation.
I found this link: Test Plans. I hope it can be useful for anyone.
Once I clicked on Test Plans in the Azure DevOps portal, it showed this label:
Upgrade to Test Manager extension to get full test management capabilities. Use the paid Test Manager extension to get access to advanced test management capabilities like assign configurations, assign testers, centralized parameters, authoring tests in grid view, exporting test results etc. in your account. Learn more.
Test plans view
Complementing with the Create a Test Plan documentation.
While working on a single Azure Data Factory solution with no source control, is it possible for a team of 3 or more developers to work in parallel without corrupting the main JSON?
Scenario:
All developers are accessing the same ADF and working on different pipelines at the same time. When one of the developers publishes his/her updates, does it somehow overwrite or ignore the changes other developers are publishing?
I tested and found that:
Multiple users can access the same Data Factory and work with different pipelines at the same time.
Publishing only affects the current user and the current pipeline which that user is developing and editing. It won't overwrite other pipelines.
For your question:
Is it possible to work parallelly for a team of 3 or more developers, without corrupting the main JSON?
Yes, it's possible.
One of the developer publishes his/her updates, does it somehow overwrites or ignores the changes other developers are publishing?
No, it doesn't. For example, if user A only develops pipeline A and then publishes again, the publish only affects the current pipeline; it won't overwrite or affect other pipelines.
You could test and prove it.
Update:
Thanks to @V_Singh for sharing the Microsoft suggestion with us:
Microsoft suggests using CI/CD only, otherwise there will be some disparity in the code.
Reply from Microsoft:
"In Live Mode can hit unexpected errors if you try to publish because you may have not the latest version ( For Example user A publish, user B is using old version and depends on an old resource and try to publish) not possible. Suggested to please use Git, since it is intended for collaborative scenarios."
Hope this helps.