Pulumi: retrieve an object or array of stored config values

When making a call to new pulumi.Config('someName'), I would like to get an array of the secrets stored under someName:aValue.
I've tried calling const cfg = new pulumi.Config('someName'), but the methods on that class all expect a single key (e.g. aValue), which isn't helpful when I want all the secrets under the logical name.
pulumi.*.yaml
someName:someValue1:
  secure: someSecureValue
someName:someValue2:
  secure: someOtherSecureValue
somefile.ts
const cfg = new pulumi.Config('someName')
With the given code above, I'm looking for a list of all secrets under someName.

From the docs:
Configuration values are always stored as strings, but can be parsed
as richly typed values.
For richer structured data, the config.getObject method can be used to
parse JSON values.
For secret values, there are functions getSecretObject() and requireSecretObject(). For your example, you would do something like
pulumi config set --secret someName '{"someValue1": "someSecureValue", "someValue2": "someOtherSecureValue" }'
and then read it with
const config = new pulumi.Config();
const someName = config.requireSecretObject("someName");
const someValue1 = someName.someValue1;
Obviously, you could also use multiple secrets as separate keys in the config file and retrieve them one-by-one with separate requireSecretObject calls.
An array would be configured as
pulumi config set --secret someName '["someSecureValue", "someOtherSecureValue"]'
and read back the same way with requireSecretObject.


How can I dump changes corresponding to a list using ruamel.yaml?

I am using the solution from the related answer How to auto-dump modified values in nested dictionaries using ruamel.yaml, with RoundTripRepresenter.
If I make changes to a list, ruamel.yaml applies them to the local variable, but it does not dump/write the changes into the file. Would it be possible to achieve this?
Example config file:
live:
- name: Ethereum
  networks:
  - chainid: 1
    explorer: https://api.etherscan.io/api
For example, I changed name to alper and tried to append a new item to the list.
my code:
from pathlib import Path
# Yaml is the auto-dumping wrapper class from the related answer

class YamlUpdate:
    def __init__(self):
        self.config_file = Path.home() / "alper.yaml"
        self.network_config = Yaml(self.config_file)
        self.change_item()

    def change_item(self):
        for network in self.network_config["live"]:
            if network["name"] == "Ethereum":
                network["name"] = "alper"
                print(network)
                network.append("new_item")

yy = YamlUpdate()
print(yy.config_file.read_text())
The output, where name remains unchanged in the original file:
{'name': 'alper', 'networks': [{'chainid': 1, 'explorer': 'https://api.etherscan.io/api'}]}
live:
- name: Ethereum
  networks:
  - chainid: 1
    explorer: https://api.etherscan.io/api
I think you should look at making a class SubConfigList that behaves like a list but notifies its parent (in the data structure), like in the other answer where SubConfig notifies its parent.
You'll also need to make sure to represent SubConfigList as a sequence in the YAML document.
If you are ever going to have a list at the root of your data structure, you'll need a list-like alternative to the dict-like Config. (Or document for the consumers/users of your code that the root always needs to be a mapping.)
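A minimal sketch of that idea, assuming a top-level Yaml owner object with a dump() method as in the related answer (SubConfigList and the owner/dump protocol are hypothetical, not part of ruamel.yaml):
import ruamel.yaml

class SubConfigList(list):
    """List-like node that asks its owner to re-dump the file on every change."""

    def __init__(self, owner, iterable=()):
        super().__init__(iterable)
        self._owner = owner  # object that knows the file path and rewrites it

    def append(self, item):
        super().append(item)
        self._owner.dump()  # notify the parent so the change reaches the file

    def __setitem__(self, index, value):
        super().__setitem__(index, value)
        self._owner.dump()

# make sure SubConfigList nodes are written out as plain YAML sequences
yaml = ruamel.yaml.YAML()
yaml.representer.add_representer(
    SubConfigList,
    lambda representer, data: representer.represent_list(data),
)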

Terraform Azure DevOps Provider

We are trying to automate Azure DevOps using Terraform. We are able to create projects and repos with Terraform, but we need to create multiple projects, with repos specific to each project.
I have my terraform.tfvars file as given below
Proj1_Repos = ["Repo1","Repo2","Repo3"]
Proj2_Repos = ["Repo4","Repo5","Repo7"]
Project_Name = ["Proj1","Proj2"]
How can I write my Terraform configuration file to create Proj1_Repos in Proj1 and Proj2_Repos in Proj2?
I think you will have an easier time restructuring the variables to look something like:
"Projects" = {
"Proj1" = {
"repos" = ["Repo1","Repo2","Repo3"]
},
"Proj2" = {
"repos" = ["Repo4","Repo5","Repo6"]
}
}
This way you can more cleanly iterate over your declarations using the for_each meta-argument on your DevOps repo resources.
Alternatively, if restructuring the input variables isn't an option, you can use a locals block to construct an association map from your variables, something like the sketch below.
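A minimal sketch of such a map, assuming the variable names from the question (zipmap pairs each project name with its repo list):
locals {
  projects = zipmap(
    var.Project_Name,                  # ["Proj1", "Proj2"]
    [var.Proj1_Repos, var.Proj2_Repos] # one repo list per project name
  )
}
The resulting local.projects has the same shape as the restructured variable above, so it can feed for_each the same way.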
If you are looking for a way to have one variable's value reference another variable, you will not be able to do so without constructing a custom data object from the keys and values of your variables. That route can get pretty wonky and is not recommended.

Best practice for using variables to configure and create new Github repository instance in Terraform instead of updating-in-place

I am trying to set up a standard Github repository template for my organization that uses Terraform to spin up new repos with the configured settings.
Every time I try to update the configuration file to create a new instance of the repository with a new name, instead it will try to update-in-place any repo that was already created using that file.
My question is what is the best practice for making my configuration file reusable with input variables like repo name? Should I make a module or is there some way of reusing that file otherwise?
Thanks for the help.
Terraform is a desired-state-configuration system, which means that your configuration should represent the full set of objects that should exist rather than an instruction to create a single object.
Therefore the typical way to add a new repository is to add a new resource block declaring that new repository, and leave the existing ones unchanged. Terraform will then see that there's a new resource not currently tracked in the state and will propose to create it.
If your repositories are configured in some systematic way that you can describe using a mechanical rule rather than manual configuration then you can potentially use the for_each meta-argument to declare multiple resource instances from the same resource block, using Terraform language expressions to describe the systematic rule.
For example, you could create a local value with a higher-level data structure that describes what should be different between your repositories and then use that data structure with for_each on a single resource block:
locals {
  repositories = tomap({
    example_1 = {
      description = "First example repository"
    }
    example_2 = {
      description = "Second example repository"
    }
  })
}

resource "github_repository" "all" {
  for_each    = local.repositories
  name        = each.key
  description = each.value.description
  private     = true
}
For simplicity in this example I've only made the name and description variable between the instances, but you can add whatever extra attributes you need for each of the elements of local.repositories and then access them via each.value inside the resource block.
The private argument above illustrates how this approach can avoid the need to re-state argument values that will be the same for each declared repository, and have your local.repositories data structure focus only on the minimum attributes needed to describe the variations you need for your local policies around GitHub repositories.
A resource block with for_each set appears as a map of objects when used in expressions elsewhere, using the same keys as in the map given in for_each. Therefore if you need to access the repository ids, or any other attribute of the systematically-declared objects, you can write Terraform expressions that work with maps. For example, if you want to output all of the repository ids as a map of strings:
output "repository_ids" {
value = tomap({
for k, r in github_repository.all : k => r.repo_id
})
}

How to pass a schema file as a macro to the BigQuery sink in Data Fusion

I am creating a Data Fusion pipeline to load CSV data from GCS to BigQuery. For my use case, I need to create a property macro and provide its value at runtime. I need to understand how to pass a schema file as a macro to the BigQuery sink.
If I simply pass the JSON schema file path as the macro value, I get the following error:
java.lang.IllegalArgumentException: Invalid schema: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 1
There is currently no way to use the contents of a file as a macro value, though there is a JIRA open for something like this (https://issues.cask.co/browse/CDAP-15424). It is expected that the schema contents are set as the macro value. The UI currently doesn't handle these types of macro values very well (https://issues.cask.co/browse/CDAP-15423), so I would suggest setting it through the REST endpoint (https://docs.cdap.io/cdap/6.0.0/en/reference-manual/http-restful-api/preferences.html#H2290), where the app name is the pipeline name.
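For instance, a sketch of setting the schema contents as an app-level preference over HTTP, assuming a local CDAP sandbox on port 11015, a pipeline named my_pipeline, and a key your sink references as ${schema_macro_key} (all three names are illustrative):
import requests

# read the schema file and store its contents as a preference on the pipeline (app)
with open("schema.json") as f:
    schema = f.read()

resp = requests.put(
    "http://localhost:11015/v3/namespaces/default/apps/my_pipeline/preferences",
    json={"schema_macro_key": schema},
)
resp.raise_for_status()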
Alternatively, you can make your pipeline a little more generic by writing an Action plugin that looks something like:
@Override
public void run(ActionContext context) throws Exception {
  // readFileContents() stands in for whatever reads your schema file;
  // key is the name of the runtime argument your sink references.
  String schema = readFileContents();
  context.getArguments().setArgument(key, schema);
}
The plugin would be the first stage in your pipeline, and would allow subsequent stages in your pipeline to use ${key} as a macro that would be replaced with the actual schema.
If you are using a BatchSink, you can read the argument in the prepareRun method with something like:
@Override
public void prepareRun(BatchSinkContext context) {
  String token =
      Objects.requireNonNull(
          context.getArguments().get("token"),
          "Argument Setter has failed in initializing the \"token\" argument.");
  HTTPSinkConfig.setToken(token);
}

How to get paths for relocatable schemas in Gio.Settings?

In Gio.Settings I can list relocatable schemas using
Gio.Settings.list_relocatable_schemas()
and I can use
Gio.Settings.new_with_path(schema_id, path)
to get a Gio.Settings instance. But how can I get all the values of path that are currently in use for a given schema_id?
Normally, a schema has a fixed path that determines where the
settings are stored in the conceptual global tree of settings.
However, schemas can also be ‘relocatable’, i.e. not equipped with a
fixed path. This is useful e.g. when the schema describes an
‘account’, and you want to be able to store an arbitrary number of
accounts.
Isn't new_with_path just for that? You have to store the schemas somewhere associated with the accounts, but that is not the responsibility of the Settings system. I think new_with_path is for the case where your schemas depend on accounts.
I think you can find more information under GSettingsSchemas: the Description section there has an example for the case where the schema is part of a plugin.
Unfortunately you cannot do it from Gio.Settings.
I see two options here:
Keep a separate gsettings key that stores the paths of your relocatable schemas.
Utilize the dconf API, which is the low-level configuration system underneath. Since there is no Python binding (guessing this is a Python question), I suggest using ctypes to bind to the C library.
If you know the root path of your relocatable schemas, you can use the snippet below to list them.
import ctypes
from ctypes import Structure, POINTER, byref, c_char_p, c_int, util
from typing import List


class DconfClient:
    def __init__(self):
        self.__dconf_client = _DCONF_LIB.dconf_client_new()

    def list(self, directory: str) -> List[str]:
        length_c = c_int()
        directory_p = c_char_p(directory.encode())
        result_list_c = _DCONF_LIB.dconf_client_list(
            self.__dconf_client, directory_p, byref(length_c)
        )
        result_list = self.__decode_list(result_list_c, length_c.value)
        return result_list

    def __decode_list(self, list_to_decode_c, length):
        new_list = []
        for i in range(length):
            # convert to str and remove the trailing slash
            decoded_str = list_to_decode_c[i].decode().rstrip("/")
            new_list.append(decoded_str)
        return new_list


class _DConfClient(Structure):
    _fields_ = []


_DCONF_LIB = ctypes.CDLL(util.find_library("dconf"))
_DCONF_LIB.dconf_client_new.argtypes = []
_DCONF_LIB.dconf_client_new.restype = POINTER(_DConfClient)
_DCONF_LIB.dconf_client_list.argtypes = [POINTER(_DConfClient), c_char_p, POINTER(c_int)]
_DCONF_LIB.dconf_client_list.restype = POINTER(c_char_p)
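Usage would look something like this (the path is illustrative, using the GNOME Terminal profiles directory as an example):
client = DconfClient()
# one entry per profile UUID stored under this path
print(client.list("/org/gnome/terminal/legacy/profiles:/"))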
You can't, at least not for an arbitrary schema, and this is by definition of what a relocatable schema is: a schema that can have multiple instances, stored in multiple arbitrary paths.
Since a relocatable schema instance can be stored basically anywhere inside DConf, gsettings has no way to list their paths, it does not keep track of instances. And dconf can't help you either, as it has no notion of schemas at all, it only knows about paths and keys. It can list the subpaths of a given path, but that's about it.
It's up to the application, when creating multiple instances of a given relocatable schema, to store each instance in a sensible, easily discoverable path, such as a subpath of the (non-relocatable) application schema. Or store the instance paths (or suffixes) as a list key in such a schema.
Or both, like Gnome Terminal does with its profiles:
org.gnome.Terminal.ProfilesList is a non-relocatable, regular schema, stored at DConf path /org/gnome/terminal/legacy/profiles:/
That schema has 2 keys: default, a string holding a single UUID, and list, a list of strings containing UUIDs.
Each profile is an instance of the relocatable schema org.gnome.Terminal.Legacy.Profile, and stored at, you guessed it, /org/gnome/terminal/legacy/profiles:/:<UUID>/!
This way a client can access all instances using either gsettings, by reading list and building the paths from the UUIDs, or dconf, by directly listing the subpaths of /org/gnome/terminal/legacy/profiles:/.
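A sketch of reading all profile instances that way from Python with PyGObject, assuming GNOME Terminal is installed (visible-name is just one example of a profile key):
import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio

BASE = "/org/gnome/terminal/legacy/profiles:/"

# read the profile UUIDs from the regular, non-relocatable schema
profiles = Gio.Settings.new("org.gnome.Terminal.ProfilesList")
for uuid in profiles.get_strv("list"):
    # build the path of each instance of the relocatable schema
    settings = Gio.Settings.new_with_path(
        "org.gnome.Terminal.Legacy.Profile", f"{BASE}:{uuid}/"
    )
    print(uuid, settings.get_string("visible-name"))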
And, of course, for non-relocatable schemas you can always get their paths with:
gsettings list-schemas --print-paths