Using Mirth Connect Destination Mappings for AWS Access Key Id results in Error

We use Vault to store our credentials. I've successfully grabbed the S3 Access Key ID and Secret Access Key using the Vault API, and used channelMap.put to create the mappings ${access_key} and ${secret_key}.
However, when I use these in the S3 File Writer I get the error:
"The AWS Access Key Id you provided does not exist in our records."
I know the Access Key ID is valid; it works if I plug it in directly in the S3 File Writer destination.
I'd appreciate any help on this. thank you.
UPDATE: I had to convert the results to a string; that fixed it.
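For anyone who hits the same thing: the values I pulled out of the Vault response were not plain JavaScript strings, so the ${access_key} mapping didn't resolve to a valid key. A rough sketch of the transformer step (vaultResponse stands in for whatever your Vault call actually returns):

// The Vault client returns Java/JSON objects, not JavaScript strings.
// Coerce to strings before mapping, or ${access_key} won't substitute cleanly.
var accessKey = String(vaultResponse.data.access_key);
var secretKey = String(vaultResponse.data.secret_key);

channelMap.put('access_key', accessKey);
channelMap.put('secret_key', secretKey);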

You can try moving the variable to a higher-scoped map: globalChannelMap, globalMap, or configurationMap. I would use the last one, since it can store passwords in a form other than plain text. You are currently using the channelMap, whose scope applies only to the current message while it travels through the channel.
You can read more about variable maps and their scopes in the Mirth User Guide, section "Variable Maps" (page 393). That part of the manual is really important to understand.
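For example, if the credentials live in the configuration map, a transformer step can read them like this (a sketch; the entry names s3_access_key and s3_secret_key are placeholders for whatever you define under Settings):

// Configuration map entries are available to every channel
// and persist across messages and redeployments.
var accessKey = configurationMap.get('s3_access_key');
var secretKey = configurationMap.get('s3_secret_key');
// They can also be referenced directly as ${s3_access_key} in connector fields.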

See my comment; it was a race condition between Vault, Mirth, and AWS.

Related

KMS KeyPolicy for CloudTrail read/write and EventBridge read?

I have the following resources in a CDK project:
from aws_cdk import (
    aws_cloudtrail as cloudtrail,
    aws_events as events,
    aws_events_targets as targets,
    aws_kms as kms,
)

# Create a Customer-Managed Key (CMK) for encrypting the CloudTrail logs
mykey = kms.Key(self, "key", alias="somekey")

# Create a CloudTrail Trail, an S3 bucket, and a CloudWatch Log Group
trail = cloudtrail.Trail(self, "myct", send_to_cloud_watch_logs=True, management_events=cloudtrail.ReadWriteType.WRITE_ONLY)

# Create an EventBridge Rule to do something when certain events get matched in the CloudWatch Log Group
rule = events.Rule(self, "rule",
    event_pattern=events.EventPattern(
        # the contents of the event pattern don't matter for this example
    ),
    targets=[
        # the contents of the targets don't matter either
    ],
)
The problem is, if I pass my key to the trail with the encryption_key=mykey parameter, CloudTrail complains that it can't use the key.
I've tried many different KMS policies, but other than making it wide open to the entire world, I can't figure out how to enable my CloudTrail Trail to read/write using the key (it has to put data into the S3 bucket), and allow CloudWatch and EventBridge to decrypt the encrypted data in the S3 bucket.
The documentation on this is very poor, and depending on which source I look at, they use different syntax and don't explain why they do things. Like, here's just one example from a CFT:
Condition:
  StringLike:
    'kms:EncryptionContext:aws:cloudtrail:arn': !Sub 'arn:aws:cloudtrail:*:${AWS::AccountId}:trail/*'
OK, but what if I need to connect up EventBridge and CloudWatch Logs, too? No example, no mention of it, as if this use case doesn't exist.
If I omit the encryption key, everything works fine - but I do need the data encrypted at rest in S3, since it's capturing sensitive operations in my master payer account.
Is there any shorthand for this in CDK, or is there an example in CFT (or even outside of IaC tools entirely) of the proper key policy to use in this scenario?
I tried variations on mykey.grant_decrypt(trail.log_group), mykey.grant_encrypt_decrypt(trail), mykey.grant_decrypt(rule), etc. and all of them throw an inscrutable stack trace saying something is undefined, so apparently those methods just don't work.
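The closest I've come is translating that documented condition into a key policy statement myself, along these lines (a sketch; the principal and actions are my guesses from the CloudTrail docs, and it still doesn't cover the EventBridge/CloudWatch side):

from aws_cdk import aws_iam as iam

# Guess at the statement behind the documented condition: let the CloudTrail
# service principal use this CMK, but only for trails in this account.
mykey.add_to_resource_policy(iam.PolicyStatement(
    sid="AllowCloudTrailEncrypt",
    effect=iam.Effect.ALLOW,
    principals=[iam.ServicePrincipal("cloudtrail.amazonaws.com")],
    actions=["kms:GenerateDataKey*", "kms:DescribeKey"],
    resources=["*"],  # in a key policy, "*" means "this key"
    conditions={
        "StringLike": {
            "kms:EncryptionContext:aws:cloudtrail:arn":
                f"arn:aws:cloudtrail:*:{self.account}:trail/*"
        }
    },
))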

Azure Copy Activity REST Results Unexpected

I'm attempting to pull data from the Square Connect v1 API using ADF, utilizing a Copy Activity with a REST source. I am successfully pulling back data; however, the results are unexpected.
The endpoint is /v1/{location_id}/payments, and I have three query parameters configured on the request.
I can successfully pull this data via Postman.
The results are stored in a blob, and they look as if I did not specify any parameters whatsoever. Only when I hardcode the parameters into the relative path do I get correct results.
I feel I must be missing a setting somewhere, but which one?
You can try setting the values you want in a Set Variable activity and then having your Copy activity reference those variables. This will tell you whether or not the issue is with the dynamic content. I have run into some unexpected behavior myself. The benefit of the intermediate Set Variable activity is twofold: first, it coerces the data type; second, it lets you see what the value is. A sketch of the idea follows.
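For instance, a Set Variable activity could compute the begin time (a sketch; the variable name beginTime and the date math are placeholders for your actual parameters, and the variable must be declared on the pipeline):

{
  "name": "SetBeginTime",
  "type": "SetVariable",
  "typeProperties": {
    "variableName": "beginTime",
    "value": {
      "value": "@formatDateTime(addDays(utcNow(), -1), 'yyyy-MM-dd')",
      "type": "Expression"
    }
  }
}

The Copy activity's REST source can then build the relative URL from the variable instead of separate parameter fields, e.g.:

@concat('/v1/', pipeline().parameters.location_id, '/payments?begin_time=', variables('beginTime'))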
My apologies for not using comments. I do not yet have enough points to comment.

How to secure an entity in Sails?

I'm developing an API with Sails, and now I need to secure some attributes of an entity. Those attributes should be accessible only to an admin or the owning user.
I have a structure like this:
Employee (contains your employee records)
fullName
hourlyWage
phoneNumber
accountBank
Location (contains a record for each location you operate)
streetAddress
city
state
zipcode
...
I need to encrypt phoneNumber and accountBank so that no one can see the values of these fields in the database; only the owner or the admin should.
How can I do that? Thanks.
You are looking for a way to encrypt data so that people without the required access rights cannot see it.
The solution is not Sails.js-specific, and Node actually comes with tools to encrypt data: https://nodejs.org/api/crypto.html.
The key rule here is to always keep your secret password safe.
As for integration in your Sails.js application, I would use lifecycle callbacks in your models. The official documentation provides a good example here: http://sailsjs.org/documentation/concepts/models-and-orm/lifecycle-callbacks
Basically you just define a function that will be called each time the record is about to be created, fetched or updated. You can then apply your encrypt/decrypt functions there.
This will encrypt/decrypt your phone numbers and bank account numbers automatically.
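For instance, a minimal sketch of that approach (the key handling is simplified; in real code the key would come from somewhere safe, such as an environment variable, and you would add a matching decrypt step wherever records are read back):

// api/models/Employee.js
var crypto = require('crypto');

// Assumed to be a 32-byte key, stored as hex in the environment.
var SECRET_KEY = Buffer.from(process.env.EMPLOYEE_KEY_HEX, 'hex');

function encrypt(plain) {
  var iv = crypto.randomBytes(16);
  var cipher = crypto.createCipheriv('aes-256-cbc', SECRET_KEY, iv);
  var enc = Buffer.concat([cipher.update(String(plain), 'utf8'), cipher.final()]);
  // Store the IV alongside the ciphertext so the value can be decrypted later.
  return iv.toString('hex') + ':' + enc.toString('hex');
}

module.exports = {
  attributes: {
    fullName: 'string',
    hourlyWage: 'float',
    phoneNumber: 'string',
    accountBank: 'string'
  },

  // Called by Waterline just before a record is inserted.
  beforeCreate: function (values, cb) {
    values.phoneNumber = encrypt(values.phoneNumber);
    values.accountBank = encrypt(values.accountBank);
    cb();
  }
};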
Regarding access control, you can use Sails' policies along with authentication to determine if the client has the right to access the resource. If not you can always remove attributes from the response sent back to the client.

Add metric name in OTSDB via API

I am adding data into OpenTSDB from different sources, and I give the metric name for each data point using an XML file. I also don't have any access to OpenTSDB to create metric names via the terminal.
I have referred to the links below:
API PUT
GitHub Issue
In the GitHub issue, I couldn't understand how to use --auto-metric.
I know how to create a metric using the terminal; here I am creating the abxcs metric:
./tsdb mkmetric abxcs
But how do I create a metric using the API?
FYI: please suggest a solution using Java.
Thanks in advance for your help.
To have metric names auto-created on the fly, you'll need to set
tsd.core.auto_create_metrics = true
in the OpenTSDB configuration file. Ref: http://opentsdb.net/docs/build/html/user_guide/configuration.html
The documentation describes that setting as follows: "Whether or not a data point with a new metric will assign a UID to the metric. When false, a data point with a metric that is not in the database will be rejected and an exception will be thrown."
The CLI equivalent is to pass the --auto-metric switch when starting the tsd process.
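Once that flag is on, there is no separate "create metric" API call; posting a data point with a new metric name assigns the UID on the fly. A minimal Java sketch against the HTTP API (host, port, metric, and tags are placeholders):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TsdbPut {
    public static void main(String[] args) throws Exception {
        // With tsd.core.auto_create_metrics = true, posting a data point
        // for "abxcs" creates the metric automatically.
        String json = "{\"metric\":\"abxcs\","
                + "\"timestamp\":" + (System.currentTimeMillis() / 1000) + ","
                + "\"value\":42,"
                + "\"tags\":{\"host\":\"web01\"}}";

        URL url = new URL("http://localhost:4242/api/put");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(json.getBytes(StandardCharsets.UTF_8));
        }
        // OpenTSDB replies 204 No Content on success.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}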

Copying data from S3 to Redshift

I feel like this should be a lot easier than it's been on me.
copy table
from 's3://s3-us-west-1.amazonaws.com/bucketname/filename.csv'
CREDENTIALS 'aws_access_key_id=my-access;aws_secret_access_key=my-secret'
REGION 'us-west-1';
Note: I added the REGION clause after having a problem, but it did nothing.
Where I am confused, though, is that in the bucket properties there is only the https://path/to/the/file.csv. Since all the documentation I have read calls for the path to start with s3://, I assumed I could just change https to s3, as shown in my example.
However I get this error:
"Error : ERROR: S3ServiceException:
The bucket you are attempting to access must be addressed using the specified endpoint.
Please send all future requests to this endpoint.,Status 301,Error PermanentRedirect,Rid"
I am using Navicat for PostgreSQL to connect to Redshift, and I'm running on a Mac.
The S3 path should be 's3://bucketname/filename.csv'. Try this.
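Something like this (an untested sketch; the table, bucket, file, and credentials are placeholders, and CSV is added on the assumption the file really is comma-delimited):

COPY my_table
FROM 's3://bucketname/filename.csv'
CREDENTIALS 'aws_access_key_id=my-access;aws_secret_access_key=my-secret'
CSV
REGION 'us-west-1';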
Yes, it should be a lot easier :-)
I have only seen this error when the S3 bucket is not in US Standard. In such cases you need to use an endpoint-based address, e.g. http://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt.
You can find the endpoint for your region on this documentation page:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region