Create new line on TOML file

I am trying to understand the TOML structure. Consider this example:
[[servers]]
ip = "10.0.0.1"
role = "frontend"
[[servers]]
ip = "10.0.0.2"
role = "backend"
developer = "developer_name"
If I parse the above, I get this JSON:
{
  "servers": [
    {
      "ip": "10.0.0.1",
      "role": "frontend"
    },
    {
      "developer": "developer_name",
      "ip": "10.0.0.2",
      "role": "backend"
    }
  ]
}
As you can see, developer is nested inside the second object, but I need developer in the root. I use this website to verify the TOML: TOML Parser.
Expected result:
{
  "servers": [
    {
      "ip": "10.0.0.1",
      "role": "frontend"
    },
    {
      "ip": "10.0.0.2",
      "role": "backend"
    }
  ],
  "developer": "developer_name"
}

Key/value pairs within TOML tables are not guaranteed to be in any specific order.
The way to get 'developer' into the root table is to place it before the 'servers' array of tables:
developer = "developer_name"
[[servers]]
ip = "10.0.0.1"
role = "frontend"
[[servers]]
ip = "10.0.0.2"
role = "backend"
This will result in this JSON structure:
{
  "developer": "developer_name",
  "servers": [
    {
      "ip": "10.0.0.1",
      "role": "frontend"
    },
    {
      "ip": "10.0.0.2",
      "role": "backend"
    }
  ]
}
This small example could also be formatted with an inline array:
servers = [{ip = "10.0.0.1", role = "frontend"}, {ip = "10.0.0.2", role = "backend"}]
developer = "developer_name"
That would result in the same JSON structure.
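If you prefer to check this programmatically rather than with the website, here is a small sketch using Python's standard-library tomllib (available since Python 3.11); the TOML string is the corrected document from above:

import json
import tomllib  # standard library since Python 3.11

doc = """
developer = "developer_name"

[[servers]]
ip = "10.0.0.1"
role = "frontend"

[[servers]]
ip = "10.0.0.2"
role = "backend"
"""

# tomllib.loads returns a plain dict; 'developer' lands in the root table,
# while each [[servers]] entry becomes an element of the 'servers' list.
print(json.dumps(tomllib.loads(doc), indent=2))

Running this prints the expected JSON from the question, with developer at the root.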

Related

Problem in assigning roles to a user while creating it with a POST HTTP request

I can successfully create a user by calling the following path in Postman:
http://{KEYCLOAK_IP}/auth/admin/realms/{REALM_NAME}/users
The body content that I send is like following:
{
  "enabled": true,
  "username": "Reza",
  "email": "reza@sampleMailServer1.com",
  "firstName": "Reza",
  "lastName": "Azad",
  "credentials": [
    {
      "type": "password",
      "value": "123",
      "temporary": false
    }
  ]
}
Now, let's assume that we have a client named browserApp, and this client has a role named borwserAppRoleUser. Also, the realm has a role named realmRoleUser.
To include the above-mentioned roles in the body of the HTTP request, I tried the following structure:
{
  "enabled": true,
  "username": "Reza",
  "email": "reza@sampleMailServer1.com",
  "firstName": "Reza",
  "lastName": "Azad",
  "credentials": [
    {
      "type": "password",
      "value": "123",
      "temporary": false
    }
  ],
  "role": [
    {
      "id": "borwserAppRoleUser",
      "name": "test",
      "description": "${role_create-client}",
      "composite": false,
      "clientRole": true,
      "containerId": "browserApp"
    },
    {
      "id": "realmRoleUser",
      "composite": false,
      "clientRole": false
    }
  ]
}
Sending the above body content results in a 400 Bad Request response. The error contains this message:
Unrecognized field "role" (class org.keycloak.representations.idm.UserRepresentation), not marked as ignorable
Also, I am sure that the rest of the role object is not correct.
I searched for examples online, but I could not find any sample covering role assignment. Can anybody please help me fix this problem?
The REST API does not support assigning realm and client roles in a single JSON payload. That is only supported via Add Realm with a JSON import, and the import JSON needs extra data beyond your payload.
This is a working example of realm-import JSON data:
{
  "id": "test",
  "realm": "test",
  "users": [
    {
      "enabled": true,
      "username": "Reza",
      "email": "reza@sampleMailServer1.com",
      "firstName": "Reza",
      "lastName": "Azad",
      "credentials": [
        {
          "type": "password",
          "value": "123",
          "temporary": false
        }
      ],
      "realmRoles": [
        "user"
      ],
      "clientRoles": {
        "borwserAppRoleUser": [
          "test"
        ]
      }
    }
  ],
  "scopeMappings": [
    {
      "client": "borwserAppRoleUser",
      "roles": [
        "test"
      ]
    }
  ],
  "client": {
    "borwserAppRoleUser": [
      {
        "name": "test",
        "description": "${role_create-client}"
      }
    ]
  },
  "roles": {
    "realm": [
      {
        "name": "user",
        "description": "Have User privileges"
      }
    ]
  }
}
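For completeness, here is a rough sketch of POSTing that import document to the realm-creation endpoint with Python's requests; the host is a placeholder, and the admin token is assumed to have been obtained as in step 1.1 below:

import requests

# `token` is assumed: an admin access token from the master realm (see 1.1 below)
with open("realm-import.json") as f:  # the JSON document above, saved to a file
    payload = f.read()

resp = requests.post(
    "http://KEYCLOAK-IP/auth/admin/realms",  # placeholder host
    data=payload,
    headers={"Authorization": f"Bearer {token}",
             "Content-Type": "application/json"},
)
resp.raise_for_status()  # 201 Created on success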
If you want to assign a user's realm role and client role instead, use separate API calls.
#1 Assign user's realm role
POST {KEYCLOAK-IP}/auth/admin/realms/{REALM-NAME}/users/{USER-UUID}/role-mappings/realm
In the body of the POST:
[
  {
    "id": {REALM ROLE UUID},
    "name": {ROLE NAME},
    "composite": false,
    "clientRole": false,
    "containerId": {REALM NAME}
  }
]
1.1 Get a master token
1.2 Get the user UUID
1.3 Get the realm role UUID and name
1.4 POST the realm role onto the user
#2 Assign user's client role
POST {KEYCLOAK-IP}/auth/admin/realms/{REALM-NAME}/users/{USER-UUID}/role-mappings/clients/{CLIENT-UUID}
In the body of the POST:
[
  {
    "id": {CLIENT ROLE ID},
    "name": {ROLE NAME},
    "description": "${role_create-client}",
    "composite": false,
    "clientRole": true,
    "containerId": {CLIENT-UUID}
  }
]
2.1 Get a master token
2.2 Get the user UUID (same as 1.2)
2.3 Get the client UUID
2.4 Get the client role UUID and name
2.5 POST the client role onto the user
Finally, confirm both assigned roles with this API:
GET {KEYCLOAK-IP}/auth/admin/realms/{REALM-NAME}/users/{USER-UUID}/role-mappings
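To make the sequence concrete, here is a hedged Python sketch of the realm-role flow (steps 1.1 through 1.4) using requests; the base URL, realm name, and admin credentials are placeholders:

import requests

BASE = "http://KEYCLOAK-IP/auth"  # placeholder
REALM = "REALM-NAME"              # placeholder

# 1.1 Get a master token (password grant against the master realm)
token = requests.post(
    f"{BASE}/realms/master/protocol/openid-connect/token",
    data={"grant_type": "password", "client_id": "admin-cli",
          "username": "admin", "password": "admin"},  # placeholder credentials
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 1.2 Get the user's UUID by username
user = requests.get(f"{BASE}/admin/realms/{REALM}/users",
                    params={"username": "Reza"}, headers=headers).json()[0]

# 1.3 Get the realm role's UUID and name by role name
role = requests.get(f"{BASE}/admin/realms/{REALM}/roles/realmRoleUser",
                    headers=headers).json()

# 1.4 POST the realm role onto the user
requests.post(
    f"{BASE}/admin/realms/{REALM}/users/{user['id']}/role-mappings/realm",
    json=[{"id": role["id"], "name": role["name"]}],
    headers=headers,
).raise_for_status()

The client-role flow (steps 2.1 through 2.5) follows the same pattern, except you first resolve the client UUID and then POST to the .../role-mappings/clients/{CLIENT-UUID} endpoint.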

How to delete a user by email ID using the Azure SCIM API in Databricks?

I need to know if there is a way to delete a user from Databricks by email only, using the SCIM API. As of now, I can see it can only delete a user by ID, which means I first need to retrieve the user's ID and then use it to delete.
I am using this API from PowerShell to delete users by email:
https://learn.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/scim/scim-users
If you look into the documentation for the Get Users command of the SCIM Users REST API, you can see that you can specify a filtering condition. For example, to find a specific user, you can filter on the userName attribute, like this:
GET /api/2.0/preview/scim/v2/Users?filter=userName+eq+example@databricks.com HTTP/1.1
Host: <databricks-instance>
Accept: application/scim+json
Authorization: Bearer dapi48…a6138b
It will return a list of items in the Resources section, from which you can extract the user ID to use for the delete operation:
{
  "totalResults": 1,
  "startIndex": 1,
  "itemsPerPage": 1,
  "schemas": [
    "urn:ietf:params:scim:api:messages:2.0:ListResponse"
  ],
  "Resources": [
    {
      "id": "8679504224234906",
      "userName": "example@databricks.com",
      "emails": [
        {
          "type": "work",
          "value": "example@databricks.com",
          "primary": true
        }
      ],
      "entitlements": [
        {
          "value": "allow-cluster-create"
        },
        {
          "value": "databricks-sql-access"
        },
        {
          "value": "workspace-access"
        }
      ],
      "displayName": "User 1",
      "name": {
        "familyName": "User",
        "givenName": "1"
      },
      "externalId": "12413",
      "active": true,
      "groups": [
        {
          "display": "123",
          "type": "direct",
          "value": "13223",
          "$ref": "Groups/13223"
        }
      ]
    }
  ]
}
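Putting the two calls together, here is a small Python sketch of the lookup-then-delete flow; the workspace URL and token are placeholders (the question uses PowerShell, but the same two HTTP requests apply there):

import requests

HOST = "https://<databricks-instance>"  # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>",  # placeholder
           "Accept": "application/scim+json"}

def delete_user_by_email(email: str) -> None:
    # 1. Resolve the user ID by filtering on userName
    resp = requests.get(f"{HOST}/api/2.0/preview/scim/v2/Users",
                        params={"filter": f"userName eq {email}"},
                        headers=HEADERS)
    resp.raise_for_status()
    resources = resp.json().get("Resources", [])
    if not resources:
        raise ValueError(f"no user found for {email}")
    # 2. Delete by the resolved ID
    user_id = resources[0]["id"]
    requests.delete(f"{HOST}/api/2.0/preview/scim/v2/Users/{user_id}",
                    headers=HEADERS).raise_for_status()

delete_user_by_email("example@databricks.com")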

Adding a custom tag based on topicName (wildcard) when using JmxTrans to send Kafka JMX metrics to InfluxDB

Basically, what I wanted was to collect the MessagesInPerSec metric for all Kafka topics and add the topic name as a custom tag in InfluxDB, so I can query by topic rather than by the objDomain definition. Below is my JmxTrans configuration (note the wildcard in the topic, used to fetch the MessagesInPerSec JMX attribute for all topics):
{
  "servers": [
    {
      "port": "9581",
      "host": "192.168.43.78",
      "alias": "kafka-metric",
      "queries": [
        {
          "outputWriters": [
            {
              "@class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
              "url": "http://192.168.43.78:8086/",
              "database": "kafka",
              "username": "admin",
              "password": "root"
            }
          ],
          "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=*",
          "attr": [
            "Count",
            "MeanRate",
            "OneMinuteRate",
            "FiveMinuteRate",
            "FifteenMinuteRate"
          ],
          "resultAlias": "newTopic"
        }
      ],
      "numQueryThreads": 2
    }
  ]
}
which yields a result in InfluxDB as follows:
[name=newTopic, time=1589425526087, tags={attributeName=FifteenMinuteRate,
className=com.yammer.metrics.reporting.JmxReporter$Meter, objDomain=kafka.server,
typeName=type=BrokerTopicMetrics,name=MessagesInPerSec,topic=backblaze_smart},
precision=MILLISECONDS, fields={FifteenMinuteRate=1362.9446063537794, _jmx_port=9581
}]
and creates a tag with the whole objDomain specified in the config. But I wanted topic as a separate tag, something like the following:
[name=newTopic, time=1589425526087, tags={attributeName=FifteenMinuteRate,
className=com.yammer.metrics.reporting.JmxReporter$Meter, objDomain=kafka.server,
topic=backblaze_smart,
typeName=type=BrokerTopicMetrics,name=MessagesInPerSec,topic=backblaze_smart},
precision=MILLISECONDS, fields={FifteenMinuteRate=1362.9446063537794, _jmx_port=9581
}]
I was not able to find adequate documentation on how to use the wildcard topic value as a separate tag with jmxtrans when writing to InfluxDB.
You just need to add the following additional properties to the InfluxDB output writer. Just make sure you are using the latest jmxtrans release. The docs are here: https://github.com/jmxtrans/jmxtrans/wiki/InfluxDBWriter
"typeNames": ["topic"],
"typeNamesAsTags": "true"
Here is your config with the above modifications:
{
  "servers": [
    {
      "port": "9581",
      "host": "192.168.43.78",
      "alias": "kafka-metric",
      "queries": [
        {
          "outputWriters": [
            {
              "@class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
              "url": "http://192.168.43.78:8086/",
              "database": "kafka",
              "username": "admin",
              "password": "root",
              "typeNames": ["topic"],
              "typeNamesAsTags": "true"
            }
          ],
          "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=*",
          "attr": [
            "Count",
            "MeanRate",
            "OneMinuteRate",
            "FiveMinuteRate",
            "FifteenMinuteRate"
          ],
          "resultAlias": "newTopic"
        }
      ],
      "numQueryThreads": 2
    }
  ]
}
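With typeNamesAsTags enabled, topic becomes a queryable tag key. As a quick sanity check, here is a sketch using the influxdb Python client (the 1.x client, with the host, credentials, and measurement name taken from the config above):

from influxdb import InfluxDBClient  # pip install influxdb (1.x client)

client = InfluxDBClient(host="192.168.43.78", port=8086,
                        username="admin", password="root",
                        database="kafka")

# 'topic' is now a tag, so you can filter on it directly
result = client.query(
    "SELECT \"FifteenMinuteRate\" FROM \"newTopic\" "
    "WHERE \"topic\" = 'backblaze_smart'")
print(list(result.get_points()))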

Permission issue for an ECS Service to use an ALB

I am trying to deploy an ECS stack with an ALB using CloudFormation, and I get an error at service creation which seems to be a missing permission to access the load balancer.
Here is the error: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
Here is the service definition:
"EcsService": {
"Type":"AWS::ECS::Service",
"DependsOn": [
"loadBalancer",
"EcsServiceRole"
],
"Properties":{
"Cluster":{
"Ref": "EcsCluster"
},
"DesiredCount":"1",
"DeploymentConfiguration":{
"MaximumPercent":100,
"MinimumHealthyPercent":0
},
"LoadBalancers": [
{
"ContainerName": "test-web",
"ContainerPort": "80",
"TargetGroupArn" : {
"Ref": "loadBalancer"
},
}
],
"Role":{
"Ref": "EcsServiceRole"
},
"TaskDefinition":{
"Ref": "runWebServerTaskDefinition"
}
}
}
Here is the Load Balancer definition:
"loadBalancer" : {
"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
"Properties": {
"Name": "testalb",
"Scheme" : "internal",
"Subnets" : [
"subnet-b8217295",
"subnet-ddaad2b8",
"subnet-6d71fb51"
],
"LoadBalancerAttributes" : [
{ "Key" : "idle_timeout.timeout_seconds", "Value" : "50" }
],
"SecurityGroups": [
{ "Ref": "InstanceSecurityGroupOpenWeb" },
{ "Ref" : "InstanceSecurityGroupOpenFull" }
],
"Tags" : [
{ "Key" : "key", "Value" : "value" },
{ "Key" : "key2", "Value" : "value2" }
]
}
}
Here is the IAM role the service should use:
"EcsServiceRole": {
"Type":"AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement":[
{
"Effect":"Allow",
"Principal":{
"Service":[
"ecs.amazonaws.com"
]
},
"Action":[
"sts:AssumeRole"
]
}
]
},
"Path":"/",
"Policies":[
{
"PolicyName":"ecs-service",
"PolicyDocument":{
"Statement":[
{
"Effect":"Allow",
"Action":[
"elasticloadbalancing:*",
"ec2:*"
],
"Resource":"*"
}
]
}
}
]
}
}
I didn't find whether there is a specific IAM namespace for the ALB.
Do you have an idea?
TargetGroupArn should point to a Target Group ARN, not an ALB ARN. Currently, it points to the Load Balancer ARN:
"TargetGroupArn": {
  "Ref": "loadBalancer"
}
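Define an AWS::ElasticLoadBalancingV2::TargetGroup resource and reference that instead. Here is a minimal sketch; the resource name targetGroup, the port/protocol, and the vpcId parameter are placeholders for your template, and the ALB still needs a Listener forwarding to this target group:

"targetGroup": {
  "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
  "Properties": {
    "Name": "testtg",
    "Port": 80,
    "Protocol": "HTTP",
    "VpcId": { "Ref": "vpcId" }
  }
}

Then, in the service, "TargetGroupArn": { "Ref": "targetGroup" } works because Ref on a target group resource returns its ARN.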
UPDATE:
As of July 19th, 2018, it is possible to create IAM service-linked roles using CloudFormation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-servicelinkedrole.html
EcsServiceLinkedRole:
  Type: "AWS::IAM::ServiceLinkedRole"
  Properties:
    AWSServiceName: "ecs.amazonaws.com"
    Description: "Role to enable Amazon ECS to manage your cluster."
OLD ANSWER:
Since AWS introduced service-linked roles, I no longer specify a role for my AWS::ECS::Service. It defaults to the service-linked role, which has all the necessary permissions.

chef-solo not updating postgres pg_hba.conf

I am using Chef Solo to provision a Vagrant Virtual Machine. Here is the relevant Vagrantfile snippet:
chef.run_list = [
  "databox::default",
  "mydbstuff"
]
chef.json = {
  "postgresql": {
    "config": {
      "listen_addresses": "*"
    },
    "pg_hba": [
      {"type": "local", "db": "all", "user": "postgres", "addr": nil, "method": "ident"},
      {"type": "local", "db": "all", "user": "all", "addr": nil, "method": "md5"},
      {"type": "host", "db": "all", "user": "all", "addr": "127.0.0.1/32", "method": "md5"},
      {"type": "host", "db": "all", "user": "all", "addr": "::1/128", "method": "md5"},
      {"type": "local", "db": "all", "user": "vagrant", "addr": nil, "method": "ident"},
      {"type": "host", "db": "all", "user": "all", "addr": "192.168.248.1/24", "method": "md5"}
    ]
  },
  "databox": {
    "db_root_password": "abc123",
    "databases": {
      "postgresql": [
        { "username": "db1", "password": "abc123", "database_name": "db1" },
        { "username": "db2", "password": "abc123", "database_name": "db2" }
      ]
    }
  }
}
The mydbstuff::default recipe looks like this:
postgresql_connection_info = {
  :host => "localhost",
  :port => node['postgresql']['config']['port'],
  :username => 'postgres',
  :password => node['postgresql']['password']['postgres']
}

postgresql_database_user 'vagrant' do
  connection postgresql_connection_info
  password 'vagrant'
  action :create
end

node['databox']['databases']['postgresql'].each do |db|
  postgresql_database_user 'vagrant' do
    connection postgresql_connection_info
    action :grant
    database_name db['database_name']
  end
end
I am trying to allow connections by the local vagrant user without a password, and by any user from the VirtualBox private network. The pg_hba array in my chef.json has four lines copied from the default configuration and two lines that add those rules. If I add the two lines to pg_hba.conf manually, they work just fine.
The problem is that my changes aren't actually written to the pg_hba.conf file. What's preventing them from being written?
It appears that the databox cookbook overwrites the Postgres permissions array using node.set instead of just modifying the part that it needs.
I have submitted a pull request to the project to change this behavior so that additional entries can be added to the file.
I faced the same problem with chef-solo. My way out was to create a template for pg_hba.conf and replace the file at the end of the recipe run.
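For reference, here is a minimal sketch of that workaround as a Chef resource; the PostgreSQL 9.1 config path and the pg_hba.conf.erb template (which you would add to your cookbook) are assumptions:

# e.g. at the end of mydbstuff/recipes/default.rb (hypothetical placement)
template '/etc/postgresql/9.1/main/pg_hba.conf' do
  source 'pg_hba.conf.erb'
  owner 'postgres'
  group 'postgres'
  mode '0600'
  # render the rules straight from the attribute array set in chef.json
  variables(pg_hba: node['postgresql']['pg_hba'])
  # assumes a service[postgresql] resource is defined elsewhere (e.g. by the postgresql cookbook)
  notifies :reload, 'service[postgresql]', :delayed
end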