Can I export the list of projects from the Iguazio cluster, how? - mlops

I would like to export all my projects from an Iguazio cluster. How can I do that?

It's actually pretty straightforward.
Just run the following code snippet in your Jupyter notebook:
import mlrun

# Connect to the MLRun run database / API service
db = mlrun.get_run_db()

# List every project registered in the cluster and print it as YAML
projects = db.list_projects()
for project in projects:
    print(project.to_yaml())
This prints the YAML of all the projects. If you need specific information about a project, you can parse it out of the YAML.
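For instance, a minimal sketch of pulling a couple of fields back out of the YAML (this assumes the common pyyaml package is available in the notebook; the metadata/spec keys mirror the usual project YAML layout, so adjust them to whatever the printout above actually shows):

import mlrun
import yaml  # pyyaml, assumed to be installed in the notebook image

db = mlrun.get_run_db()
for project in db.list_projects():
    # Turn the YAML string back into a dict and pick out specific fields
    data = yaml.safe_load(project.to_yaml())
    name = data.get("metadata", {}).get("name")
    artifact_path = data.get("spec", {}).get("artifact_path")
    print(f"{name}: artifact_path={artifact_path}")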

Related

Importing a module in Databricks

I'm not sure how importing a module in Python really works in Azure Databricks. First, I was able to make it work using the following:
import sys
sys.path.append("/Workspace/Repos/Github Repo/sparkling-to-databricks/src")
from utils.some_util import *
I was able to use the imported function. But then I restarted the cluster and this would not work even though the path is in sys.path.
I also tried the following:
spark.sparkContext.addPyFile("/Workspace/Repos/Github Repo/my-repo/src/utils/some_util.py")
This did not work either. Can someone please tell me what I'm doing wrong here and suggest a solution? Thanks.

Why are my Azure functions not visible in the function list after my deployment from GitHub/Kudu?

I'm trying to deploy my first HTTP trigger Function App to Azure.
It was created with the Azure Function extension in VS Code with TypeScript template.
I use my Git repo as the source and the Kudu App Service build.
My functions are working well locally. I can see them in VS Code > Azure tab, Local Project > Functions.
I have no error on the deployment itself but I cannot see my two functions in the Azure Functions list.
In the Kudu UI, I can see that all my files are correctly deployed:
[Kudu screenshot]
My settings are:
[settings screenshot]
Where can I find some logs on what went wrong? Any idea of other things to check?
Any help will be appreciated.
I could use Zip deploy (https://learn.microsoft.com/en-us/azure/azure-functions/deployment-zip-push) as an alternative way to deploy this (I haven't tried it yet). I would like to know what's wrong with my current setup.
I'm not sure, but the problem may be that you configured the wrong runtime.
Here are the steps I took:
Create a Function App project with an HTTP-trigger function based on TypeScript in VS Code.
Upload the project to GitHub.
Use the Deployment Center on the portal to configure deployment from Git.
After deploying, check the Functions page.
By the way, you could also deploy directly from VS Code.

Is it possible to auto-select different "gcloud" configs for different projects in multiple workspaces/folders?

From this question and this article, we learn that it is possible to create multiple configs for the gcloud SDK.
But it seems that you have to manually switch between them, by running:
gcloud config configurations activate <CONFIG_NAME>
But is there a way for each config to be automatically selected whenever I open up a project workspace/folder on VSCode? How can I do this?
I've just tested activating a new config on a different VSCode project. That seems to update it globally. Now, all of my VSCode windows (different projects) are seeing the same activated config.
Isn't it dangerous? I mean, I could be uploading stuff to the cloud on a different project that I'm not aware of. How do people usually handle this? Do I need to run the activate command on every script before deploying something?
Unfortunately, I am not aware of such a possibility; however, I have found something interesting that may help you. There is the following extension:
GCP Project Switcher
The extension only allows you to change projects; however, as I looked into the code, it runs the gcloud config set project command under the hood. You could raise a request to add the ability to change the whole configuration instead of only the project, as it is a very similar approach.
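If you end up scripting the switch yourself, here is a minimal sketch of running the activate command from the question before any deploy step; the folder-to-config mapping and config names are made up for illustration:

import subprocess
from pathlib import Path

# Hypothetical mapping from a project folder name to a gcloud configuration name
CONFIGS = {
    "project-a": "config-a",
    "project-b": "config-b",
}

def activate_gcloud_config_for(workspace: Path) -> None:
    config = CONFIGS.get(workspace.name)
    if config is None:
        raise RuntimeError(f"No gcloud configuration mapped for {workspace.name}")
    # Same command as above, just run from the script before deploying anything
    subprocess.run(
        ["gcloud", "config", "configurations", "activate", config],
        check=True,
    )

activate_gcloud_config_for(Path.cwd())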

Is it possible to see the script generated from a pgAdmin import job?

I was having trouble importing a csv, so I created a table without loading and went through the pgAdmin 4 importer - which worked! But now I want to see what it did differently to make it work. Is there a way to see the script behind the import job?
Thanks!
Check pgAdmin4.log; you might find the complete command executed by pgAdmin 4 in the log file itself.
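As a rough illustration only (the log location varies by install and platform, and the exact log format is an assumption), something like this could be used to pull copy-style import commands out of the log:

from pathlib import Path

# Common default on desktop installs; adjust to wherever your pgAdmin 4 writes its log
log_path = Path.home() / ".pgadmin" / "pgadmin4.log"

# The CSV importer ultimately issues a copy-style command, so grepping for "copy"
# in the log is a reasonable first pass
for line in log_path.read_text(errors="ignore").splitlines():
    if "copy" in line.lower():
        print(line)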

Jenkins parameterized build with Nexus artifacts

I am working on Jenkins to set up continuous integration. I want to create a job with parameters that will have a drop-down list of artifacts stored in Nexus and a drop-down list of environments that we want those artifacts to be deployed to (WebSphere). I am new to Jenkins and would appreciate any help that gets me started on the job.
You will need the Extended Choice Parameter plugin to achieve your goal.
You will have to store the list of artifacts fetched from Nexus in a file, and the same goes for the list of environments. These files will then be picked up by the above plugin using the method described in this link; the Property File option is what you need to use in the given plugin.
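As an illustration of the "store the list in a file" step, here is a minimal sketch that fetches the versions from Nexus and writes them into a property file; the metadata URL reuses the pattern from the Groovy example further down, and the output path and property key are assumptions you would adapt to your setup:

import urllib.request
import xml.etree.ElementTree as ET

# maven-metadata.xml for the artifact in Nexus (same URL pattern as the Groovy example below)
metadata_url = (
    "http://NexusServer.fo.net:8081/nexus/service/local/repositories/"
    "repo-name/content/groupID/maven-metadata.xml"
)

with urllib.request.urlopen(metadata_url) as resp:
    root = ET.fromstring(resp.read())

# Collect all published versions from <versioning><versions><version>...</version>
versions = [v.text for v in root.findall("./versioning/versions/version")]

# The Extended Choice Parameter "Property File" option reads key=value entries,
# with the value being a comma-separated list of choices
with open("/var/lib/jenkins/artifact_versions.properties", "w") as f:
    f.write("ARTIFACT_VERSIONS=" + ",".join(sorted(versions, reverse=True)) + "\n")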
You can also use the Dynamic Choice Parameter plugin, then write a Groovy script that pulls the metadata out of Nexus.
For me it works just great.
import groovy.xml.*
import groovy.util.*

// URL of the artifact's maven-metadata.xml in the Nexus repository
myUrl = "http://NexusServer.fo.net:8081/nexus/service/local/repositories/repo-name/content/groupID/maven-metadata.xml"

// Fetch and parse the metadata XML
def data = new URL(myUrl).getText()
def dataObj = new XmlParser().parseText(data)

// Collect all published versions
def versions = []
for (v in dataObj.versioning[0].versions[0].version) {
    versions.add(v.value()[0])
}

// Return the versions, newest first (the last expression becomes the choice list)
versions.sort(false).reverse()
This is how it looks in the end.