Play Framework: How to read an entire configuration section consisting of unknown keys - scala

Below is how I'd like to configure security profiles for my Play application; each entry in auth.securityProfiles consists of an Operation => Roles pair:
auth {
  securityProfiles {
    myOperation1 = "author, auditor"
    myOperation2 = "admin"
    myOperationN = "auditor, default"
  }
}
How do I read all the entries in section auth.securityProfiles to produce a Map like this?
val securityProfiles = Map(
  "myOperation1" -> "author, auditor",
  "myOperation2" -> "admin",
  "myOperationN" -> "auditor, default"
)
Thanks.

Here is my solution... I've just modified the configuration like this...
auth {
  securityProfiles = [
    {
      operation = "myOperation1"
      roles = ["author", "auditor"]
    }
    {
      operation = "myOperation2"
      roles = ["admin"]
    }
    {
      operation = "myOperationN"
      roles = ["auditor", "default"]
    }
  ]
}
... and then read it with the following code snippet:
import scala.collection.JavaConverters._
import scala.collection.mutable.Map

// getConfigList/getString/getStringList all return Options in the Play 2.x API
val securityProfiles = Map[String, List[String]]().withDefaultValue(List.empty)
configuration.getConfigList("auth.securityProfiles").foreach { _.asScala.foreach { config =>
  config.getString("operation").foreach { op =>
    securityProfiles += (op -> config.getStringList("roles").map(_.asScala.toList).getOrElse(List.empty))
  }
}}
I hope that helps.
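For the record, the original flat layout can also be read without changing the config, by walking the entry set of the underlying Typesafe Config. A minimal sketch, assuming Play 2.x where configuration.underlying exposes the raw com.typesafe.config.Config:

import scala.collection.JavaConverters._

// Collect every (unknown) key under auth.securityProfiles in one pass
val securityProfiles: Map[String, String] =
  configuration.underlying
    .getConfig("auth.securityProfiles")
    .entrySet.asScala
    .map(entry => entry.getKey -> entry.getValue.unwrapped.toString)
    .toMap
// Map("myOperation1" -> "author, auditor", "myOperation2" -> "admin", ...)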

Related

OCI: Create nodes in Kubernetes nodepool with bastion agent configured

I'm trying to deploy a Kubernetes cluster in Oracle Cloud Infrastructure using Terraform.
I want every node deployed (in a private subnet) to have the Bastion agent plugin activated in Cloud Agent.
But I cannot see how to define the details of the instances (setting agent_config on the node pool instances).
My code so far is:
resource "oci_containerengine_cluster" "generated_oci_containerengine_cluster" {
compartment_id = var.cluster_compartment
endpoint_config {
is_public_ip_enabled = "true"
subnet_id = oci_core_subnet.oke_public_api.id
}
kubernetes_version = var.kubernetes_version
name = "josealbarran_labcloudnative_oke"
options {
kubernetes_network_config {
pods_cidr = "10.244.0.0/16"
services_cidr = "10.96.0.0/16"
}
service_lb_subnet_ids = [oci_core_subnet.oke_public_lb.id]
}
vcn_id = var.cluster_vcn
}
# Check doc: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/containerengine_node_pool
resource "oci_containerengine_node_pool" "node_pool01" {
cluster_id = "${oci_containerengine_cluster.generated_oci_containerengine_cluster.id}"
compartment_id = var.cluster_compartment
initial_node_labels {
key = "name"
value = "pool01"
}
kubernetes_version = var.kubernetes_version
name = "lab_cloud_native_oke_pool01"
node_config_details {
size = "${length(data.oci_identity_availability_domains.ads.availability_domains)}"
dynamic "placement_configs" {
for_each = data.oci_identity_availability_domains.ads.availability_domains[*].name
content {
availability_domain = placement_configs.value
subnet_id = oci_core_subnet.oke_private_worker.id
}
}
}
node_shape = "VM.Standard.A1.Flex"
node_shape_config {
memory_in_gbs = "16"
ocpus = "1"
}
node_source_details {
image_id = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaalgodii3qx3mfasp6ai22bja7mabfwsxiwkzxx7lhdfdbbuyqcznq"
source_type = "IMAGE"
}
ssh_public_key = "ssh-rsa AAAAB3xxxxxxxx......."
timeouts {
create = "60m"
delete = "90m"
}
}
You can use a "cloudinit_config" data source to run custom scripts on the OKE node pool in OCI. Define each script as a local, for example:
second_script_template = templatefile("${path.module}/cloudinit/second.template.sh", {})
Then combine several scripts like this:
data "cloudinit_config" "worker" {
gzip = false
base64_encode = true
part {
filename = "worker.sh"
content_type = "text/x-shellscript"
content = local.worker_script_template
}
part {
filename = "second.sh"
content_type = "text/x-shellscript"
content = local.second_script_template
}
part {
filename = "third.sh"
content_type = "text/x-shellscript"
content = local.third_script_template
}
}
Refer to https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/instructions.adoc#14-configuring-cloud-init-for-the-nodepools
If you just want to edit the default script, see https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/cloudinit.adoc
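To deliver the rendered cloud-init to the pool's nodes, the node pool resource can pass it as user data. A sketch, assuming the data source above; the node_metadata/user_data wiring follows the linked terraform-oci-oke module, so verify it against the oci_containerengine_node_pool provider docs:

resource "oci_containerengine_node_pool" "node_pool01" {
  # ... all the arguments shown in the question ...
  node_metadata = {
    user_data = data.cloudinit_config.worker.rendered
  }
}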

Groovy/Jenkins: how to refactor sh(script:"curl ...") to URL?

My Jenkins pipeline currently successfully invokes Bitbucket REST API by invoking curl in a shell, as in the code below:
// Jenkinsfile
@Library('my-sandbox-libs@dev') my_lib
pipeline {
    agent any
    stages {
        stage( "1" ) { steps { script { echo "hello" } } }
        stage( "2" ) {
            steps {
                script {
                    log = new org.log.Log()
                    def cred_id = "bitbucket_cred_id"
                    def url_base = "https://bitbucket.company.com"
                    def commit = "76136485c45df256a62cbc2c3c5f1f3efcc86258"
                    def status =
                        //"INPROGRESS",
                        //"SUCCESSFUL",
                        "FAILED"
                    def viz_url = "https://path/to/nowhere"
                    try {
                        my_lib.notifyBitbucketBuildStatus(cred_id,
                                                          url_base,
                                                          commit,
                                                          status,
                                                          "foo",
                                                          42,
                                                          viz_url,
                                                          log)
                    } catch (Exception e) { // catch block elided in the original
                        echo "${e}"
                    }
                }
            }
        }
        stage( "3" ) { steps { script { echo "world" } } }
    }
    post { always { script { echo log.asJsonString() } } }
}
import groovy.json.JsonOutput

def notifyBitbucketBuildStatus(cred_id,
                               url_base,
                               commit,
                               build_state,
                               build_info_name,
                               build_info_number,
                               viz_url,
                               log) {
    def rest_path = "rest/build-status/1.0/commits"
    def dict = [:]
    dict.state = build_state
    dict.key = "${build_info_name}_${build_info_number}"
    dict.url = viz_url
    withCredentials([string(credentialsId: cred_id,
                            variable: 'auth_token')]) {
        def cmd = "curl -f -L " +
                  "-H \"Authorization: Bearer ${auth_token}\" " +
                  "-H \"Content-Type:application/json\" " +
                  "-X POST ${url_base}/${rest_path}/${commit} " +
                  "-d \'${JsonOutput.toJson(dict)}\'"
        if ( 0 != sh(script: cmd, returnStatus: true) ) {
            log.warn("Failed updating build status with Bitbucket")
        }
    }
}
I would like to refactor function notifyBitbucketBuildStatus() to use a "native" Groovy-language solution, rather than invoking curl in a shell. I read the following on this topic:
https://www.baeldung.com/groovy-web-services
Groovy built-in REST/HTTP client?
...based on which I thought the refactored function would look like this:
def notifyBitbucketBuildStatus(cred_id,
                               url_base,
                               commit,
                               build_state,
                               build_info_name,
                               build_info_number,
                               viz_url,
                               log) {
    def rest_path = "rest/build-status/1.0/commits"
    def dict = [:]
    dict.state = build_state
    dict.key = "${build_info_name}_${build_info_number}"
    dict.url = viz_url
    def req = new URL("${url_base}/${rest_path}/${commit}").openConnection()
    req.setRequestMethod("POST")
    req.setDoOutput(true)
    req.setRequestProperty("Content-Type", "application/json")
    withCredentials([string(credentialsId: cred_id,
                            variable: 'auth_token')]) {
        req.setRequestProperty("Authorization", "Bearer ${auth_token}")
    }
    def msg = JsonOutput.toJson(dict)
    req.getOutputStream().write(msg.getBytes("UTF-8"));
    if ( 200 != req.getResponseCode() ) {
        log.warn("Failed updating build status with Bitbucket")
    }
}
...but this generates the exception java.io.NotSerializableException: sun.net.www.protocol.https.HttpsURLConnectionImpl
That "not serializable" made me think the error had something to do with a failure to transform something to a string, so I also tried this, but it did not change the error:
def msg = JsonOutput.toJson(dict).toString()
What is wrong with the refactored code that uses class URL, and what is the right way to use it to invoke the REST API?
For the life of me, I can't see what's different between the above and the linked Stack Overflow Q&A, and my inexperience with the language is such that I rely largely on adapting existing examples.
Solution
I would highly suggest you use the HTTP Request and the Pipeline Utility Steps plugins for this. You can then use those steps in a Groovy script as follows:
node('master') {
    withCredentials([string(credentialsId: cred_id, variable: 'auth_token')]) {
        def response = httpRequest url: "https://jsonplaceholder.typicode.com/todos",
                                   customHeaders: [[name: 'Authorization', value: "Bearer ${auth_token}"]]
        // keep the response handling inside withCredentials: a def inside that
        // closure is not visible outside it
        if( response.status != 200 ) {
            error("Service returned a ${response.status}")
        }
        def json = readJSON text: response.content
        println "The User ID is ${json[0]['userId']}"
        println "The follow json obj is ${json}"
    }
}
Obviously you can modify the code to build a method, and you will need to update it with the appropriate URL, as in the sketch below.
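For example, here is a sketch of the question's notifyBitbucketBuildStatus() rebuilt on httpRequest; httpMode, contentType, requestBody, customHeaders and validResponseCodes are standard parameters of the HTTP Request plugin, and the Bitbucket path comes from the question:

import groovy.json.JsonOutput

def notifyBitbucketBuildStatus(cred_id, url_base, commit, build_state,
                               build_info_name, build_info_number, viz_url, log) {
    def payload = [state: build_state,
                   key: "${build_info_name}_${build_info_number}",
                   url: viz_url]
    withCredentials([string(credentialsId: cred_id, variable: 'auth_token')]) {
        def response = httpRequest url: "${url_base}/rest/build-status/1.0/commits/${commit}",
                                   httpMode: 'POST',
                                   contentType: 'APPLICATION_JSON',
                                   requestBody: JsonOutput.toJson(payload),
                                   customHeaders: [[name: 'Authorization', value: "Bearer ${auth_token}"]],
                                   validResponseCodes: '100:599' // never throw; inspect the status ourselves
        if (!(response.status in 200..299)) {
            log.warn("Failed updating build status with Bitbucket")
        }
    }
}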
I found a sucky and unsatisfying answer - but an answer nonetheless - that I posted here: https://stackoverflow.com/a/69486890/5437543
I hate that solution because it would appear to demonstrate that the Jenkins/Groovy language itself imposes an artificial contrivance on how I can organize my code. Effectively, I am prevented from doing
// Jenkinsfile
@Library('my-sandbox-libs@dev') my_lib
pipeline {
    agent any
    stages {
        stage( "1" ) { steps { script { my_lib.func() } } }
    }
}

// vars/my_lib.groovy
def func() {
    def req = new URL("https://whatever").openConnection()
    ...
    withCredentials([string(credentialsId: cred_id,
                            variable: 'auth_token')]) {
        req.setRequestProperty("Authorization", "Bearer ${auth_token}")
    }
    ...
}
...and I am forced to do
// Jenkinsfile
@Library('my-sandbox-libs@dev') my_lib
pipeline {
    agent any
    stages {
        stage( "1" ) { steps { script { my_lib.func(my_lib.getCred()) } } }
    }
}

// vars/my_lib.groovy
def getCred() {
    withCredentials([string(credentialsId: cred_id,
                            variable: 'auth_token')]) {
        return auth_token
    }
}

def func(auth_token) {
    def req = new URL("https://whatever").openConnection()
    ...
    req.setRequestProperty("Authorization", "Bearer ${auth_token}")
    ...
}
Extremely dissatisfying conclusion. I hope another answerer can point out a solution that doesn't rely on this contrived code organization.
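One more avenue worth exploring, offered as a sketch rather than a tested fix: Jenkins throws NotSerializableException when a CPS-transformed method holds a non-serializable local (here the URLConnection) across a pipeline step. Marking the HTTP code @NonCPS keeps the connection out of CPS entirely, so withCredentials may be able to stay in the caller without the getCred() contortion:

// vars/my_lib.groovy
import groovy.json.JsonOutput

@NonCPS // no pipeline steps allowed in here; plain Groovy only
def postJson(String url, String token, Map payload) {
    def conn = new URL(url).openConnection()
    conn.setRequestMethod("POST")
    conn.setDoOutput(true)
    conn.setRequestProperty("Content-Type", "application/json")
    conn.setRequestProperty("Authorization", "Bearer ${token}")
    conn.getOutputStream().write(JsonOutput.toJson(payload).getBytes("UTF-8"))
    return conn.getResponseCode() // only a serializable int escapes this method
}

def func(cred_id) {
    withCredentials([string(credentialsId: cred_id, variable: 'auth_token')]) {
        // auth_token is a plain String, safe to pass into the @NonCPS method
        if (postJson("https://whatever", auth_token, [state: "FAILED"]) != 200) {
            echo "Failed updating build status"
        }
    }
}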

Tags format on Packer ec2-ami deployment

I'm trying to create an Amazon EC2 AMI for the first time using HashiCorp Packer, but I'm getting this failure on the tag creation. I already tried several formats by trial and error, with no luck:
[ec2-boy-oh-boy@ip-172-168-99-23 pogi]$ packer init .
Error: Missing item separator

  on variables.pkr.hcl line 28, in variable "tags":
  27:   default = [
  28:     "environment" : "testing"

Expected a comma to mark the beginning of the next item.
My code ec2.pkr.hcl looks like this:
[ec2-boy-oh-boy@ip-172-168-99-23 pogi]$ cat ec2.pkr.hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 0.0.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "ec2" {
  ami_name           = "${var.ami_prefix}-${local.timestamp}"
  instance_type      = "t2.micro"
  region             = "us-east-1"
  vpc_id             = "${var.vpc}"
  subnet_id          = "${var.subnet}"
  security_group_ids = ["${var.sg}"]
  ssh_username       = "ec2-boy-oh-boy"
  source_ami_filter {
    filters = {
      name                = "amzn2-ami-hvm-2.0*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["12345567896"]
  }
  launch_block_device_mappings = [
    {
      "device_name": "/dev/xvda",
      "delete_on_termination": true
      "volume_size": 10
      "volume_type": "gp2"
    }
  ]
  run_tags        = "${var.tags}"
  run_volume_tags = "${var.tags}"
}

build {
  sources = [
    "source.amazon-ebs.ec2"
  ]
}
[ec2-boy-oh-boy@ip-172-168-99-23 pogi]$
Then my code variables.pkr.hcl looks like this:
[ec2-boy-oh-boy@ip-172-168-99-23 pogi]$ cat variables.pkr.hcl
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

variable "ami_prefix" {
  type    = string
  default = "ec2-boy-oh-boy"
}

variable "vpc" {
  type    = string
  default = "vpc-asd957d"
}

variable "subnet" {
  type    = string
  default = "subnet-asd957d"
}

variable "sg" {
  type    = string
  default = "sg-asd957d"
}

variable "tags" {
  type    = map
  default = [
    environment = "testing"
    type        = "none"
    production  = "later"
  ]
}
Your default value for the tags variable is a list (square brackets), while both the run_tags and run_volume_tags directives expect a map(string).
I was able to make the following changes to your variables file and run packer init successfully:
variable "tags" {
default = {
environment = "testing"
type = "none"
production = "later"
}
type = map(string)
}

Why do I get ACCESS_REFUSED using op-rabbit but not NewMotion/Akka?

Using these parameters:
canada {
  hosts = ["dd.weather.gc.ca"]
  username = "anonymous"
  password = "anonymous"
  port = 5671
  exchange = "xpublic"
  queue = "q_anonymous_gsk"
  routingKey = "v02.post.observations.swob-ml.#"
  requestedHeartbeat = 300
  ssl = true
}
I can connect to a weather service in Canada using NewMotion/Akka, but when I try op-rabbit, I get:
ACCESS_REFUSED - access to exchange 'xpublic' in vhost '/' refused for user 'anonymous'
[INFO] [foo-akka.actor.default-dispatcher-7] [akka://foo/user/$a/connection] akka://foo/user/$a/connection connected to amqp://anonymous#{dd.weather.gc.ca:5671}:5671//
[INFO] [foo-op-rabbit.default-channel-dispatcher-6] [akka://foo/user/$a/connection/$a] akka://foo/user/$a/connection/$a connected
[INFO] [foo-akka.actor.default-dispatcher-4] [akka://foo/user/$a/connection/confirmed-publisher-channel] akka://foo/user/$a/connection/confirmed-publisher-channel connected
[INFO] [foo-akka.actor.default-dispatcher-4] [akka://foo/user/$a/connection/$b] akka://foo/user/$a/connection/$b connected
[ERROR] [foo-akka.actor.default-dispatcher-3] [akka://foo/user/$a/subscription-q_anonymous_gsk-1] Connection related error while trying to re-bind a consumer to q_anonymous_gsk. Waiting in anticipating of a new channel.
...
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to exchange 'xpublic' in vhost '/' refused for user 'anonymous', class-id=40, method-id=10)
The following works in NewMotion/Akka:
val inQueue = "q_anonymous_gsk"
val inExchange = "xpublic"

val canadaQueue = canadaChannel.queueDeclare(inQueue, false, true, false, null).getQueue
canadaChannel.queueBind(canadaQueue, inExchange, inQueue)

val consumer = new DefaultConsumer(canadaChannel) {
  override def handleDelivery(consumerTag: String, envelope: Envelope, properties: BasicProperties, body: Array[Byte]) {
    val s = fromBytes(body)
    if (republishElsewhere) {
      // ...
    }
  }
}
canadaChannel.basicConsume(canadaQueue, true, consumer)
but using op-rabbit like this:
val inQueue = "q_anonymous_gsk"
val inExchange = "xpublic"
val inRoutingKey = "v02.post.observations.swob-ml.#"

val rabbitCanada: ActorRef = actorSystem.actorOf(Props(classOf[RabbitControl], connParamsCanada))

def runSubscription(): SubscriptionRef = Subscription.run(rabbitCanada) {
  channel(qos = 3) {
    consume(topic(queue(inQueue), List(inRoutingKey))) {
      (body(as[String]) & routingKey) { (msg, key) =>
        ack
      }
    }
  }
}
I get the ACCESS_REFUSED error near the top of this post. Why? How do I fix this if I want to use op-rabbit?
Have you tried using the correct vhost, one on which the anonymous user actually has permissions?
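If the vhost checks out, note that the stack trace shows exchange.declare being refused (class-id=40, method-id=10): the working NewMotion code only declares a queue and binds it, while the op-rabbit topic binding declares the exchange as well. A sketch, untested, of binding to the pre-existing exchange passively via op-rabbit's Exchange.passive:

import com.spingo.op_rabbit._
import com.spingo.op_rabbit.Directives._

def runSubscription(): SubscriptionRef = Subscription.run(rabbitCanada) {
  channel(qos = 3) {
    // Exchange.passive asserts the exchange exists instead of re-declaring it
    consume(topic(queue(inQueue), List(inRoutingKey),
                  Exchange.passive(Exchange.topic(inExchange)))) {
      (body(as[String]) & routingKey) { (msg, key) =>
        ack
      }
    }
  }
}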

SSLHandshakeException happens during file upload to AWS S3 via Alpakka

I'm trying to set up Alpakka S3 for file upload purposes. Here are my configs:
alpakka s3 dependency:
...
"com.lightbend.akka" %% "akka-stream-alpakka-s3" % "0.20"
...
Here is application.conf:
akka.stream.alpakka.s3 {
  buffer = "memory"
  proxy {
    host = ""
    port = 8000
    secure = true
  }
  aws {
    credentials {
      provider = default
    }
  }
  path-style-access = false
  list-bucket-api-version = 2
}
File upload code example:
private val awsCredentials = new BasicAWSCredentials("my_key", "my_secret_key")
private val awsCredentialsProvider = new AWSStaticCredentialsProvider(awsCredentials)
private val regionProvider = new AwsRegionProvider { def getRegion: String = "us-east-1" }
private val settings = new S3Settings(MemoryBufferType, None, awsCredentialsProvider, regionProvider, false, None, ListBucketVersion2)
private val s3Client = new S3Client(settings)(system, materializer)

// Source.fromFuture expects a Future, so use single for a literal ByteString
val fileSource = Source.single(ByteString("ololo blabla bla"))
val fileName = UUID.randomUUID().toString
val s3Sink: Sink[ByteString, Future[MultipartUploadResult]] = s3Client.multipartUpload("my_basket", fileName)

fileSource.runWith(s3Sink)
  .map {
    result => println(s"${result.location}")
  } recover {
    case ex: Exception => println(s"$ex")
  }
When I run this code I get:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
What can be a reason?
The certificate problem arises for bucket names containing dots: the bucket name becomes part of the TLS hostname in virtual-hosted-style requests. You may switch to akka.stream.alpakka.s3.path-style-access = true to get rid of this.
We're considering making it the default: https://github.com/akka/alpakka/issues/1152
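In the snippet from the question, the Boolean passed to the S3Settings constructor appears to be that same path-style flag, so the equivalent code-level change would look like this (a sketch inferred from the constructor call shown above; verify the parameter against your Alpakka version):

// Same settings as before, but with pathStyleAccess (the fifth argument) set to true
private val settings = new S3Settings(MemoryBufferType, None, awsCredentialsProvider,
  regionProvider, true, None, ListBucketVersion2)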