Using terraform to fetch entity name under alias - hashicorp-vault

I am trying to fetch all the entity names using the vault_identity_entity data source; however, I am unable to fetch the name of the entity located under aliases.
Sample code:
data "vault_identity_group" "group" {
  group_name = "vaultadmin"
}

data "vault_identity_entity" "entity" {
  for_each  = toset(data.vault_identity_group.group.member_entity_ids)
  entity_id = each.value
}

data "null_data_source" "values" {
  for_each = data.vault_identity_entity.entity
  inputs = {
    ssh_user_details = lookup(jsondecode(data.vault_identity_entity.entity[each.key].data_json), "name", {})
  }
}
"data_json": "{\"aliases\":[{\"canonical_id\":\"37b4c764-a4ec-dcb7-c3c7-31cf9c51e456\",\"creation_time\":\"2022-07-20T08:53:36.553988277Z\",\"custom_metadata\":null,\"id\":\"59fb8a9c-1c0c-0591-0f6e-1a153233e456\",\"last_update_time\":\"2022-07-20T08:53:36.553988277Z\",\"local\":false,\"merged_from_canonical_ids\":null,\"metadata\":null,\"mount_accessor\":\"auth_approle_12d1d8af\",\"mount_path\":\"auth/approle/\",\"mount_type\":\"approle\",\"name\":\"name.user#test.com\"}],\"creation_time\":\"2022-07-20T08:53:36.553982983Z\",\"direct_group_ids\":[\"e456cb46-2b51-737c-3277-64082352f47e\"],\"disabled\":false,\"group_ids\":[\"e456cb46-2b51-737c-3277-64082352f47e\"],\"id\":\"37b4c764-a4ec-dcb7-c3c7-31cf9c51e456\",\"inherited_group_ids\":[],\"last_update_time\":\"2022-07-20T08:53:36.553982983Z\",\"merged_entity_ids\":null,\"metadata\":null,\"name\":\"entity_ec5c123\",\"namespace_id\":\"root\",\"policies\":[]}",
The above script returns the top-level entity name entity_ec5c123. Any suggestions on how to retrieve the name field under aliases, which holds the user's email ID?

Maybe something like this?
data "vault_identity_group" "group" {
  group_name = "vaultadmin"
}

data "vault_identity_entity" "entity" {
  for_each  = toset(data.vault_identity_group.group.member_entity_ids)
  entity_id = each.value
}

locals {
  mount_accessor = "auth_approle_12d1d8af"
  # mount_path = "auth/approle/"
  aliases = { for k, v in data.vault_identity_entity.entity : k => jsondecode(v.data_json)["aliases"] }
}

data "null_data_source" "values" {
  for_each = data.vault_identity_entity.entity
  inputs = {
    ssh_user_details = lookup({ for alias in lookup(local.aliases, each.key, "ent_missing") : alias.mount_accessor => alias.name }, local.mount_accessor, "ent_no_alias_on_auth_method")
  }
}
Basically you want to do a couple of lookups here. You can simplify this if you can guarantee that each entity will only have a single alias, but otherwise you should probably look up the alias for a specific mount_accessor and discard the other entries.
Haven't really done much testing with this code, but you should be able to run terraform console after doing an init on your workspace and figure out what the data structures look like if you have issues.
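If you would rather avoid null_data_source (it is deprecated in recent versions of the null provider), an untested variation of the same idea using try() might look like this; it still relies on the mount_accessor local above and assumes Terraform 0.12.20+ where try() is available:
locals {
  # Map of entity key => alias name on the target auth mount, with a fallback string.
  ssh_user_details = {
    for k, v in data.vault_identity_entity.entity :
    k => try(
      [for a in jsondecode(v.data_json)["aliases"] : a.name if a.mount_accessor == local.mount_accessor][0],
      "ent_no_alias_on_auth_method"
    )
  }
}

output "ssh_user_details" {
  value = local.ssh_user_details
}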

Related

How can I create multiple Fargate profiles with a single namespace and different labels using Terraform?

I am trying to create Fargate profiles for EKS using Terraform. The requirement is to create multiple Fargate profiles bound to a single namespace but with different labels.
I have defined the selector variable as below:
variable "selectors" {
description = "description"
type = list(object({
namespace = string
labels = any
}))
default = []
}
and the fargate module block as below :
resource "aws_eks_fargate_profile" "eks_fargate_profile" {
for_each = {for namespace in var.selectors: namespace.namespace => namespace}
cluster_name = var.cluster_name
fargate_profile_name = format("%s-%s","fargate",each.value.namespace)
pod_execution_role_arn = aws_iam_role.eks_fargate_role.arn
subnet_ids = var.vpc_subnets
selector {
namespace = each.value.namespace
labels = each.value.labels
}
and calling the module as below :
selectors = [
{
namespace = "ns"
labels = {
Application = "fargate-1"
}
},
{
namespace = "ns"
labels = {
Application = "fargate-2"
}
}
]
When I try to run terraform plan, I am getting the below error:
Two different items produced the key "jenkinsbuild" in this 'for' expression. If duplicates are expected, use the ellipsis (...) after the value expression to enable grouping by key.
I tried adding (...) at the end of the for expression; this time I am getting another error, as below:
each.value is tuple with 1 element
│
│ This value does not have any attributes.
I also defined the selectors variable type as any, and tried type casting the output with string(namespace) and object(labels), but no luck.
Could you please help me achieve this? It seems I am close but I am missing something here.
Thanks and Regards,
Sandeep.
In Terraform, when using for_each, the keys must be unique. If you do not have unique keys, then use count:
resource "aws_eks_fargate_profile" "eks_fargate_profile" {
count = length(var.selectors)
selector {
namespace = var.selectors[count.index].namespace
labels = var.selectors[count.index].labels
}
...
}
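If you would rather keep for_each (its keys survive list reordering better than count indexes do), a rough, untested sketch is to build unique map keys from the namespace plus the list index:
resource "aws_eks_fargate_profile" "eks_fargate_profile" {
  # Key each entry by namespace plus its position so duplicate namespaces stay unique.
  for_each = { for idx, s in var.selectors : "${s.namespace}-${idx}" => s }

  cluster_name           = var.cluster_name
  fargate_profile_name   = format("%s-%s", "fargate", each.key)
  pod_execution_role_arn = aws_iam_role.eks_fargate_role.arn
  subnet_ids             = var.vpc_subnets

  selector {
    namespace = each.value.namespace
    labels    = each.value.labels
  }
}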

How to filter a data source (AWS AMI) based on a list of tags

I'm trying to create an aws_ami data source that fetches the latest AMI based on a few tags.
The catch is that I want to do it with a map of tags and their values, not by defining filters for each specific tag in the data source.
Example:
module-vars.tf
variable "filter-tags" {
type = "map"
default = {
"java_vendor" = "oracle"
}
}
module.tf
data "aws_ami" "aws-ami" {
most_recent = true
owners = ["self"]
// Filter code here
// e.g. FICTIONAL CODE, DON'T USE
filter {
name = "tags:${var.filter-tags}"
}
}
So obviously this filter-tags variable should be able to change and the filtered AMI should have all the tags matching.
Any ideas?
Found a way to do it with dynamic blocks
data "aws_ami" "aws-ami" {
most_recent = true
owners = ["self"]
dynamic "filter" {
for_each = var.filter-tags
iterator = tag
content {
name = "tag:${tag.key}"
values = ["${tag.value}"]
}
}
}
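For reference, consuming this might look roughly like the following; the environment entry is just a made-up second tag to show that the map can hold any number of tags, and the variable uses the Terraform 0.12+ type syntax:
variable "filter-tags" {
  type = map(string)
  default = {
    java_vendor = "oracle"
    environment = "prod" # hypothetical extra tag, purely illustrative
  }
}

output "selected_ami_id" {
  value = data.aws_ami.aws-ami.id
}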

Build dynamic LINQ queries from a string - Use Reflection?

I have some Word templates (maybe thousands). Each template has merge fields which will be filled from the database. I don't want to write separate code for every template and then build and deploy the application whenever a template is changed or a field is added to a template!
Instead, I'm trying to define all merge fields in a separate XML file, and for each field I want to write the "query" which will be called when needed. Ex:
mergefield1 will call query "Case.Parties.FirstOrDefault.NameEn"
mergefield2 will call query "Case.CaseNumber"
mergefield3 will call query "Case.Documents.FirstOrDefault.DocumentContent.DocumentType"
Etc,
So, for a particular template I scan its merge fields, and for each merge field I take its "query definition" and make that request to the database using Entity Framework and LINQ. For example, it works for these queries: "TimeSlots.FirstOrDefault.StartDateTime" or "Case.CaseNumber".
This will be an engine which generates Word documents and fills their merge fields from the XML. In addition, it will work for any new template or new merge field.
For now, I have a working version using reflection.
public string GetColumnValueByObjectByName(Expression<Func<TEntity, bool>> filter = null, string objectName = "", string dllName = "", string objectID = "", string propertyName = "")
{
string objectDllName = objectName + ", " + dllName;
Type type = Type.GetType(objectDllName);
Guid oID = new Guid(objectID);
dynamic Entity = context.Set(type).Find(oID); // get Object by Type and ObjectID
string value = ""; //the value which will be filled with data from database
IEnumerable<string> linqMethods = typeof(System.Linq.Enumerable).GetMethods(BindingFlags.Static | BindingFlags.Public).Select(s => s.Name).ToList(); //get all linq methods and save them as list of strings
if (propertyName.Contains('.'))
{
string[] properies = propertyName.Split('.');
dynamic object1 = Entity;
IEnumerable<dynamic> Child = new List<dynamic>();
for (int i = 0; i < properies.Length; i++)
{
if (i < properies.Length - 1 && linqMethods.Contains(properies[i + 1]))
{
Child = type.GetProperty(properies[i]).GetValue(object1, null);
}
else if (linqMethods.Contains(properies[i]))
{
object1 = Child.Cast<object>().FirstOrDefault(); //for now works only with FirstOrDefault - Later it will be changed to work with ToList or other linq methods
type = object1.GetType();
}
else
{
if (linqMethods.Contains(properies[i]))
{
object1 = type.GetProperty(properies[i + 1]).GetValue(object1, null);
}
else
{
object1 = type.GetProperty(properies[i]).GetValue(object1, null);
}
type = object1.GetType();
}
}
value = object1.ToString(); //.StartDateTime.ToString();
}
return value;
}
I'm not sure if this is the best approach. Does anyone have a better suggestion, or has someone already done something like this?
In short: the idea is to run generic LINQ queries against the database from a string like "Case.Parties.FirstOrDefault.NameEn".
Your approach is very good. I have no doubt that it already works.
Another approach is using an expression tree, like #Egorikas has suggested.
Disclaimer: I'm the owner of the project Eval-Expression.NET
In short, this library allows you to evaluate almost any C# code at runtime (What you exactly want to do).
I would suggest you use my library instead, to keep the code:
More readable
Easier to support
More flexible
Example
public string GetColumnValueByObjectByName(Expression<Func<TEntity, bool>> filter = null, string objectName = "", string dllName = "", string objectID = "", string propertyName = "")
{
string objectDllName = objectName + ", " + dllName;
Type type = Type.GetType(objectDllName);
Guid oID = new Guid(objectID);
object entity = context.Set(type).Find(oID); // get the object by Type and ObjectID
var value = Eval.Execute("x." + propertyName, new { x = entity });
return value.ToString();
}
The library also allows you to use dynamic strings with IQueryable.
Wiki: LINQ-Dynamic
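For what it is worth, here is a tiny self-contained sketch of the same Eval.Execute call with in-memory objects instead of an EF context. The Case and Party classes are made up for illustration, and the path uses explicit parentheses (Parties.FirstOrDefault().NameEn) because I am not certain the library accepts the parenthesis-free form stored in the XML above:
using System.Collections.Generic;
using Z.Expressions;

public class Party { public string NameEn { get; set; } }

public class Case
{
    public string CaseNumber { get; set; }
    public List<Party> Parties { get; set; } = new List<Party>();
}

public static class MergeFieldDemo
{
    // Resolve a merge-field path against a root object, mirroring the answer above.
    public static string Resolve(Case c, string path)
    {
        var result = Eval.Execute("x." + path, new { x = c });
        return result == null ? "" : result.ToString();
    }

    public static void Main()
    {
        var c = new Case
        {
            CaseNumber = "C-001",
            Parties = { new Party { NameEn = "Acme Ltd" } }
        };
        System.Console.WriteLine(Resolve(c, "CaseNumber"));                      // C-001
        System.Console.WriteLine(Resolve(c, "Parties.FirstOrDefault().NameEn")); // Acme Ltd
    }
}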

What is the relationship between real collections' names and those in the file system

I use MongoDB 3.0 with WiredTiger storage engine.
When I checked my Mongo files in dbPath, I saw the names of the files with the formats as below:
collection-0--4989330656807016483.wt
collection-2--4989330656807016483.wt
collection-4--4989330656807016483.wt
.
.
.
How can I find the relationship between these file names and the real collections' names, other than by comparing data sizes?
I have found that the command db.collection.stats() shows the wiredTiger.uri field, which defines the relationship between the collection's logical name and its file name.
A simple script to find the collection name for a given file name:
function findInDb(dbName, collectionIdToFind) {
var dbToSearch = db.getSiblingDB(dbName);
var collectionNames = dbToSearch.getCollectionNames();
for(var i = 0; i < collectionNames.length; i++){
var name = collectionNames[i];
var stats = dbToSearch.getCollection(name).stats();
var uri = stats.wiredTiger.uri;
if (uri.endsWith(collectionIdToFind))
return name;
}
return null;
}
function findInAllDbs(collectionIdToFind) {
var adminDb = db.getSiblingDB("admin");
var dbList = adminDb.runCommand({ "listDatabases": 1 }).databases;
for (var i in dbList) {
var found = findInDb(dbList[i].name, collectionIdToFind);
if (found != null) {
return dbList[i].name + "." + found;
}
}
return "(not found)";
}
print(findInAllDbs("collection-20-571885508699163146")); // filename in crash report, etc.
Use this
db.getCollection('collection_name').stats()
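If you only need the mapping for one known collection, you can also read the same field the script above uses directly; mydb and mycoll are placeholders:
// Prints the WiredTiger table URI, which ends with the base name of the collection-*.wt file in dbPath.
print(db.getSiblingDB("mydb").getCollection("mycoll").stats().wiredTiger.uri);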

CRM 2011 : Plugin to create duplicate records

I created a simple plugin to create a duplicate record that refers to the parent record. Here is my code:
var pluginExecutionContext = localContext.PluginExecutionContext;
IOrganizationService service = localContext.OrganizationService;
Entity abc = pluginExecutionContext.InputParameters["Target"] as Entity;
if (pluginExecutionContext.Depth == 1)
{
Guid abcId = abc.Id;
Entity abcCopy = new Entity("mcg_abc");
if (abc.Attributes.Contains("mcg_abccategoryoptioncode"))
{
abcCopy.Attributes["mcg_abccategoryoptioncode"] = abc.GetAttributeValue<OptionSetValue>("mcg_abccategoryoptioncode");
}
if (abc.Attributes.Contains("mcg_effectivedate"))
{
abcCopy.Attributes["mcg_effectivedate"] = isp.GetAttributeValue<DateTime>("mcg_effectivedate");
}
if (abc.Attributes.Contains("mcg_startdate"))
{
abcCopy.Attributes["mcg_startdate"] = isp.GetAttributeValue<DateTime>("mcg_startdate");
}
if (abc.Attributes.Contains("mcg_enddate"))
{
abcCopy.Attributes["mcg_enddate"] = isp.GetAttributeValue<DateTime>("mcg_enddate");
}
if (abc.Attributes.Contains("mcg_amendeddate"))
{
abcCopy.Attributes["mcg_amendeddate"] = isp.GetAttributeValue<DateTime>("mcg_amendeddate");
}
if ((abc.GetAttributeValue<OptionSetValue>("mcg_abccategoryoptioncode").Value) == 803870001)
{
//Some more fields;
}
else
{
//Some more fields;
}
// SOme more fields;
abcCopy.Attributes["mcg_parentabc"] = new EntityReference("mcg_abc", abc.Id);
service.Create(abcCopy);
}
Now the problem is that all the fields before the below check are getting copied:
if ((abc.GetAttributeValue<OptionSetValue>("mcg_abccategoryoptioncode").Value) == 803870001)
However, the fields after this check are not getting copied.
Could anybody please suggest what mistake I have made?
If you take a field from Target, it is because that field was updated on the client side. If a field was not updated, it will not be in Target. You should use Images to get the values of unchanged fields.
The field is probably empty, so an exception may arise. Try to use a plugin image, or change your code this way:
if (abc.Attributes.Contains("mcg_abccategoryoptioncode")){
if ((abc.GetAttributeValue<OptionSetValue>("mcg_abccategoryoptioncode").Value) == 803870001)
....
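If the plugin step is registered on Update, the unchanged values can be read from a pre-image instead of Target. A rough sketch, assuming a pre-entity image registered under the name "PreImage" (the image name and its registration are assumptions, not part of the original code):
// Fall back to the registered pre-image when the attribute is missing from Target.
Entity preImage = pluginExecutionContext.PreEntityImages.Contains("PreImage")
    ? pluginExecutionContext.PreEntityImages["PreImage"]
    : null;

OptionSetValue category = null;
if (abc.Attributes.Contains("mcg_abccategoryoptioncode"))
{
    category = abc.GetAttributeValue<OptionSetValue>("mcg_abccategoryoptioncode");
}
else if (preImage != null && preImage.Attributes.Contains("mcg_abccategoryoptioncode"))
{
    category = preImage.GetAttributeValue<OptionSetValue>("mcg_abccategoryoptioncode");
}

if (category != null && category.Value == 803870001)
{
    // copy the category-specific fields here
}
else
{
    // copy the other fields here
}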