cdktf: How to import resources from Grafana (Python)

I wish to import existing data sources from grafana.
The following link and its answer are not satisfactory, as the resources were created using cdktf, albeit in an earlier run.
How can I import with cdktf, similar to the following Terraform CLI commands:
terraform import grafana_data_source.by_integer_id {{datasource id}}
terraform import grafana_data_source.by_uid {{datasource uid}}

You need to use the terraform import command in your synthesized stack directory. The documentation about moving a resource from one stack to another covers how the command can be constructed.
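For example, a rough sketch (not an official cdktf workflow): synthesize the stack, then run terraform import inside the synthesized stack directory. The stack name and resource address below are placeholders; check cdktf.out/stacks/&lt;stack-name&gt;/cdk.tf.json for the exact address cdktf generated.
# Hedged sketch: run `terraform import` against the synthesized cdktf stack.
# "my-stack" and the resource address are assumptions -- look them up in
# cdktf.out/stacks/<stack-name>/cdk.tf.json after synthesizing.
import subprocess

STACK_DIR = "cdktf.out/stacks/my-stack"

subprocess.run(["cdktf", "synth"], check=True)                    # write the stack JSON
subprocess.run(["terraform", "init"], cwd=STACK_DIR, check=True)  # initialize providers
subprocess.run(
    ["terraform", "import", "grafana_data_source.by_uid", "<datasource uid>"],
    cwd=STACK_DIR,
    check=True,
)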

Related

Spark job fails on image pull in Iguazio

I am using code examples from the MLRun documentation for running a Spark job on the Iguazio platform. The docs say I can use a default Spark docker image provided by the platform, but when I try to run the job, the pod hangs with an ImagePullBackOff error. Here is the function spec item I am using:
my_func.spec.use_default_image = True
How do I configure Iguazio to use the default spark image that is supposed to be included in the platform?
You need to deploy the default images to the cluster docker registry. There is one image for remote Spark and one for the Spark operator; both contain all the necessary dependencies.
See the code below.
# This section has to be invoked once per MLRun/Iguazio upgrade
from mlrun.runtimes import RemoteSparkRuntime
RemoteSparkRuntime.deploy_default_image()   # image for remote Spark

from mlrun.runtimes import Spark3Runtime
Spark3Runtime.deploy_default_image()        # image for the Spark operator
Once these images are deployed to the cluster docker registry, a function whose spec has use_default_image = True will be able to pull the image and deploy.
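As a rough usage sketch (the function name, kind and job file below are placeholders, not taken from the docs):
# Hedged sketch: once the default images are deployed, a remote-spark
# function only needs use_default_image = True in its spec.
import mlrun

my_func = mlrun.code_to_function(
    name="my-spark-job",        # placeholder name
    kind="remote-spark",        # kind handled by RemoteSparkRuntime (assumption)
    filename="spark_job.py",    # placeholder job file
)
my_func.spec.use_default_image = True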

Azure terraform pipeline

I hope somebody can help me solve this issue and understand how to implement the best approach.
I have a production environment running tons of Azure services (SQL Server, databases, web apps, etc.).
All of that infrastructure was created with Terraform. As powerful as it is, I am terrified of using it in a pipeline for one reason.
Some of my friends often make changes to the infra manually, and since those changes are not in my Terraform state, automating this process might destroy a resource ungracefully, which is something that I don't want to face.
So I was wondering if anyone can shed some light on the following question:
Is it possible to have Terraform automatically check the infra state at every push to GitHub, and to quit if the output of the plan reports any change?
Edit, to make my example clear:
Let's say I have a Terraform state with 2 web apps, and somebody manually creates a 3rd web app in that resource group, develops some code and pushes it to GitHub. My pipeline triggers, and as a first step it runs a terraform plan and/or terraform apply. If this command reports any change, I want the pipeline to quit (fail) so I know there is something new there; if terraform plan and/or terraform apply reports no changes, the infra is up to date and the pipeline can continue with the code deployment.
Thank you in advance for any help and clarification.
Yes, you can just run
terraform plan -detailed-exitcode
The command exits with 0 when there are no changes, 2 when the plan contains changes, and 1 on error, so an exit code of 2 tells you there is drift. See here for details.
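For example, a drift-check step in the pipeline could be a small script along these lines (a sketch only; the Terraform directory is a placeholder):
# Hedged sketch of a drift-check step: fail the pipeline when the plan
# reports changes. "infra/" is a placeholder for your Terraform directory.
import subprocess
import sys

result = subprocess.run(
    ["terraform", "plan", "-detailed-exitcode", "-input=false"],
    cwd="infra/",
)

if result.returncode == 0:
    print("No drift detected - continue with the code deployment.")
elif result.returncode == 2:
    print("terraform plan reports changes (drift) - failing the pipeline.")
    sys.exit(1)
else:
    sys.exit(result.returncode)  # the plan itself failed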
Let me point out that I would highly advise you to lock down your prod environment so that nobody can make manual changes! Your CI/CD pipeline should be the only way to make changes there.
Adding to the above answer, you can also use the terraform import command to bring the manually created resources into your state file. The terraform import command is used to import existing resources into Terraform. Afterwards, run terraform plan to check whether the state is in sync.
Refer: https://www.terraform.io/docs/cli/commands/import.html

How to run command on startup?

How does one instruct Pulumi to execute one or more commands on a remote host?
The equivalent Terraform command is remote-exec.
Pulumi currently doesn't support remote-exec-like provisioners but they are on the roadmap (see https://github.com/pulumi/pulumi/issues/1691).
For now, I'd recommend using the cloud-init userdata functionality of the various providers as in this AWS EC2 example.
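A rough sketch of that workaround with pulumi_aws (the AMI ID and instance type are placeholders):
# Hedged sketch: run a command at boot via cloud-init user data.
import pulumi_aws as aws

server = aws.ec2.Instance(
    "web",
    ami="ami-0123456789abcdef0",   # placeholder AMI
    instance_type="t3.micro",      # placeholder instance type
    user_data="""#!/bin/bash
echo "hello from startup" > /tmp/startup.txt
""",
)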
Pulumi supports this with the Command package as of 2021-12-31: https://github.com/pulumi/pulumi/issues/99#issuecomment-1003445058
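A rough sketch with the Command package's remote command (host, user and key file are placeholders):
# Hedged sketch: run a command on a remote host with pulumi_command.
from pulumi_command import remote

run = remote.Command(
    "run-startup-script",
    connection=remote.ConnectionArgs(
        host="203.0.113.10",                     # placeholder host
        user="ubuntu",                           # placeholder user
        private_key=open("rsa_key.pem").read(),  # placeholder key file
    ),
    create="echo 'hello from pulumi' > /tmp/pulumi.txt",
)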

Integrate Tensorboard in KUBEFLOW pipeline using viewers

I'm using Kubeflow Pipelines for training Keras models with TF, and I'm starting from a very simple one.
The model trains fine and the pipeline works properly, but I'm not able to use the output viewer for TensorBoard properly.
Reading the documentation, it seems that just adding a proper JSON file in the root path of the training container (/mlpipeline-ui-metadata.json) should be enough, but even when I do so, nothing appears in the Artifacts section of my experiment run (while the Keras logs can be seen correctly).
Here's how I configured it:
mlpipeline-ui-metadata.json (added from the DOCKERFILE directly)
{
  "version": 1,
  "outputs": [
    {
      "type": "tensorboard",
      "source": "/tf-logs"  # Just a placeholder at the moment
    }
  ]
}
pipeline
import kfp
from kfp import dsl
from kubernetes.client.models import V1EnvVar

def train_op(epochs, batch_size, dropout, first_layer_size, second_layer_size):
    dsl.ContainerOp(
        image='MY-IMAGE',
        name='my-train',
        container_kwargs={"image_pull_policy": "Always", 'env': [
            V1EnvVar('TRAIN_EPOCHS', epochs),
            V1EnvVar('TRAIN_BATCH_SIZE', batch_size),
            V1EnvVar('TRAIN_DROPOUT', dropout),
            V1EnvVar('TRAIN_FIRST_LAYER_SIZE', first_layer_size),
            V1EnvVar('TRAIN_SECOND_LAYER_SIZE', second_layer_size),
        ]},
        command=['sh', '-c', '/src/init_script.sh'],
    ).set_memory_request('2G').set_cpu_request('2')

@dsl.pipeline(
    name='My model pipeline',
    description='Pipeline for model training'
)
def my_model_pipeline(epochs, batch_size, dropout, first_layer_size, second_layer_size):
    train_task = train_op(epochs, batch_size, dropout, first_layer_size, second_layer_size)

if __name__ == '__main__':
    kfp.compiler.Compiler().compile(my_model_pipeline, 'my_model.zip')
I've already tried accessing the running pod (kubectl exec ..) and verified that the file is actually in the right spot.
By the way, I'm using Kubeflow v0.5.
TL;DR: The source section should point to a location on a shared storage, not the pod's local file system path
The source section in mlpipeline-ui-metadata.json should point to a location where the pipelines-ui pod can later reference it, i.e. it should be on shared storage: S3 (if on AWS) or a mounted Kubernetes volume (if on-prem).
The way Kubeflow works is that, at the end of the run, it just zips mlpipeline-ui-metadata.json and stores it in MinIO storage. When you click on the Artifacts section, the UI looks for this source section in the zipped JSON and tries to read the TF events files. If the TF events files are not moved from the pod to shared storage, they won't be read, since they exist only on the ephemeral pod's file system.
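As a rough sketch, the training code (or container entrypoint) could write the metadata file with a shared-storage source instead of a local path (the S3 bucket below is a placeholder; use whatever storage the pipelines-ui pod can reach):
# Hedged sketch: point "source" at shared storage, not the pod file system.
import json

metadata = {
    "outputs": [
        {
            "type": "tensorboard",
            "source": "s3://my-bucket/tf-logs/",  # placeholder bucket/prefix
        }
    ]
}

with open("/mlpipeline-ui-metadata.json", "w") as f:
    json.dump(metadata, f)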

How to convert existing AWS environment into infra as code?

When we were building our AWS account, we did not think about using CloudFormation or Terraform. Now we have our environment all set up but don't want to tear everything down and rebuild it using CloudFormation or Terraform. So is there a way to import our infrastructure and manage it through one of them?
Thanks,
Terraform supports import, but that only brings the current state of a resource into the state file; you still need to write the code yourself. CloudFormation does not support import.
Something like https://github.com/dtan4/terraforming can be of help but YMMV.
A pretty complete answer can be found at AWS Export configuration as cloudformation template, which also covers Terraform for this purpose.
TL;DR
AWS import/export configuration as code (CloudFormation | Terraform).
Based on our Infrastructure as Code (IaC) experience, we found several ways to translate existing, manually deployed (from the web console UI) AWS infra into CloudFormation (CF) and/or Terraform (TF) code. Possible solutions are listed below:
AWS Cloudformation Templates
CF-1 | AWS CloudFormation native import feature
CF-2 | aws cli & manually translate to CF
CF-3 | Former2
CF-4 | AWS CloudFormer
Terraform Code / Modules
TF-1 | Terraforming
TF-2 | CloudCraft + Modules.tf
Related Article: https://medium.com/@exequiel.barrirero/aws-export-configuration-as-code-cloudformation-terraform-b1bca8949bca
As of October 2019, AWS supports importing existing resources into CloudFormation. See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html for examples.
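For instance, an existing resource can be brought under CloudFormation management via an IMPORT change set; a rough boto3 sketch (stack, template and bucket names are placeholders):
# Hedged sketch: import an existing S3 bucket into a CloudFormation stack.
import boto3

cfn = boto3.client("cloudformation")

cfn.create_change_set(
    StackName="my-stack",                  # placeholder stack name
    ChangeSetName="import-legacy-bucket",
    ChangeSetType="IMPORT",
    # The template must declare MyBucket (with a DeletionPolicy); URL is a placeholder.
    TemplateURL="https://s3.amazonaws.com/my-templates/template.yaml",
    ResourcesToImport=[
        {
            "ResourceType": "AWS::S3::Bucket",
            "LogicalResourceId": "MyBucket",                           # logical ID in the template
            "ResourceIdentifier": {"BucketName": "my-legacy-bucket"},  # existing bucket
        }
    ],
)
# Review the change set, then execute it (execute_change_set) to complete the import.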