When creating an S3 client like so:
import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    endpoint_url=app.config['S3_HOST'],
    aws_access_key_id=app.config['S3_SETTINGS']['accessKey'],
    aws_secret_access_key=app.config['S3_SETTINGS']['sharedSecret'],
    config=Config(signature_version='s3')
)
and then trying to upload a file like so:
s3.upload_fileobj(
    Fileobj=_file,
    Bucket=app.config['S3_BUCKET'],
    Key=file_hash,
    ExtraArgs={
        "Metadata": {
            "file_name": file_name
        }
    }
)
it throws "Error: The request signature we calculated does not match the signature you provided. Check your key and signing method."
Setting signature_version='s3v4' does not seem to work either. What signature version should I use?
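For reference, a minimal sketch of the Signature Version 4 variant mentioned above, with path-style addressing added as an extra assumption (it sometimes matters for non-AWS, S3-compatible endpoints); this is a sketch to test against, not a confirmed fix:

import boto3
from botocore.client import Config

# Sketch only: 's3v4' selects Signature Version 4, and path-style addressing
# avoids virtual-hosted bucket URLs; the app.config values are the same
# placeholders used in the question.
s3 = boto3.client(
    's3',
    endpoint_url=app.config['S3_HOST'],
    aws_access_key_id=app.config['S3_SETTINGS']['accessKey'],
    aws_secret_access_key=app.config['S3_SETTINGS']['sharedSecret'],
    config=Config(
        signature_version='s3v4',
        s3={'addressing_style': 'path'},
    ),
)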
I am trying to delete a route request parameter in apigatewayv2; the AWS docs require the following arguments to fulfill the deletion request:
apigatewayv2 delete-route-request-parameter
    --api-id <value>
    --request-parameter-key <value>
    --route-id <value>
I am not sure what AWS means by request-parameter-key; I've tried the following with no success.
aws apigatewayv2 delete-route-request-parameter --api-id red6c408c5 --route-id i09lhet --request-parameter-key '"integration.request.header.authorization": "route.request.body.payload.authorization"'
I got:
An error occurred (NotFoundException) when calling the DeleteRouteRequestParameter operation: Invalid request parameter specified
The request parameter key I would like to delete is the first entry in the RequestParameters object below:
{
    "ConnectionType": "INTERNET",
    "ContentHandlingStrategy": "CONVERT_TO_TEXT",
    "IntegrationId": "40w4rqd",
    "IntegrationMethod": "POST",
    "IntegrationType": "HTTP_PROXY",
    "IntegrationUri": "https://***/chat/api/v1/live/sync-contacts/results",
    "PassthroughBehavior": "WHEN_NO_MATCH",
    "PayloadFormatVersion": "1.0",
    "RequestParameters": {
        "integration.request.header.cognito_token": "route.request.body.payload.authorization",  // =>> deleting this one
        "integration.request.header.domainName": "context.domainName",
        "integration.request.header.connectionid": "context.connectionId"
    },
    "TimeoutInMillis": 29000
}
Is there a specific way to represent the particular request parameter to be deleted?
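For what it's worth, a hedged boto3 sketch of the same attempt: it reuses the IDs from the question, lists the exact RequestParameters keys first, and then passes the bare key (without the mapped value) to the delete call. Both the bare-key format and the use of this operation for an integration-level mapping are assumptions, not a confirmed fix.

import boto3

apigw = boto3.client('apigatewayv2')

# Inspect the integration to copy the exact parameter key strings.
integration = apigw.get_integration(ApiId='red6c408c5', IntegrationId='40w4rqd')
print(list(integration['RequestParameters'].keys()))

# Assumption: the key is passed on its own, not as a '"key": "value"' pair.
apigw.delete_route_request_parameter(
    ApiId='red6c408c5',
    RouteId='i09lhet',
    RequestParameterKey='integration.request.header.cognito_token',
)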
I'm trying to deploy a managed PostgreSQL service with Bicep, and in most cases I get an error:
"code": "InvalidParameterValue",
"message": "Invalid value given for parameter databaseName. Specify a valid parameter value."
I've tried various names for the DB; in the latest version of the script I even add a random suffix to make the name unique. It still finishes with the error, yet the service appears to be working. Another inexplicable thing is that sometimes the script finishes without any error... It's part of my IaC scenario, so I need to be able to rerun it many times.
Bicep code:
param location string

@secure()
param sqlserverLoginPassword string

param rand string = uniqueString(resourceGroup().id) // Generate unique String
param sqlserverName string = toLower('invivopsql-${rand}')
param sqlserverAdminName string = 'invivoadmin'
param psqlDatabaseName string = 'postgres' // note: not referenced anywhere in this snippet

resource flexibleServer 'Microsoft.DBforPostgreSQL/flexibleServers@2021-06-01' = {
  name: sqlserverName
  location: location
  sku: {
    name: 'Standard_B1ms'
    tier: 'Burstable'
  }
  properties: {
    createMode: 'Default'
    version: '13'
    administratorLogin: sqlserverAdminName
    administratorLoginPassword: sqlserverLoginPassword
    availabilityZone: '1'
    storage: {
      storageSizeGB: 32
    }
    backup: {
      backupRetentionDays: 7
      geoRedundantBackup: 'Disabled'
    }
  }
}
Please follow this GitHub issue for a similar error; it might help you fix your problem.
I'm trying to set up an HTTP integration in AWS API Gateway v2 (aka HTTP API). In my config I have a native JWT authorizer, and I want to append one namespaced JWT access_token claim to the HTTP request headers.
As long as the claims have simple names such as sub or iss, this works fine with the following mapping syntax:
append:header.simple = $context.authorizer.claims.simple
However, some of my claims are namespaced with an https://namespace/ prefix (a requirement from Auth0 that cannot be changed). This is where the mapping syntax is falling short for me.
Say my input JWT is like this:
{
    "aud": "my.dev.api",
    "azp": "CCCC",
    "exp": "1610606942",
    "https://my.ns/account_no": "100368421",
    "iat": "1610598342",
    "iss": "https://mytenant.auth0.com/",
    "scope": "openid profile email account:admin",
    "sub": "auth0|user-id"
}
How can I map namespaced claim https://my.ns/account_no?
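(For context, the difficulty is that the claim name itself contains ':' and '/' characters, so it can only be addressed as a quoted key rather than through dot notation; a purely illustrative Python sketch of the decoded payload:)

# Illustrative only: a decoded JWT payload is just a key/value map, and a
# namespaced claim can only be reached through a quoted key lookup.
claims = {
    "sub": "auth0|user-id",
    "https://my.ns/account_no": "100368421",
}

print(claims["sub"])                       # simple name
print(claims["https://my.ns/account_no"])  # namespaced name: quoted key required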
I tried $context.authorizer.claims['https://my.ns/account_no'] with no luck. Here is the terraform setup I use:
resource "aws_apigatewayv2_integration" "root" {
api_id = aws_apigatewayv2_api.api.id
integration_type = "HTTP_PROXY"
connection_type = "INTERNET"
description = "This is our GET / integration"
integration_method = "GET"
integration_uri = "http://${aws_lb.ecs_lb.dns_name}"
passthrough_behavior = "WHEN_NO_MATCH"
request_parameters = {
"append:header.account_no" = "$context.authorizer.claims['https://my.ns/account_no']" <-- FAILING HERE
}
}
The error I'm getting in Terraform and in the dashboard is the same:
Invalid mapping expression specified: Validation Result: warnings : [], errors : [Invalid mapping expression specified: $context.authorizer.claims["https://my.ns/account_no"]]
Thanks for your assistance.
I am new to k6 and am trying to use the tool to perform a GET request to verify an API.
When the script is executed I get a warning that terminates the script. As far as I understand, this error is somehow related to Go (if I have understood it correctly).
What I want to achieve is to execute the GET request against the endpoint URL, but I would appreciate any feedback if I have done anything incorrectly or should try another approach.
Script:
import http from "k6/http";
import { check } from "k6";

export default function () {
  var url =
    "https://endpoint.example.to.cloud/api/reports/v1/SMOKETESTC6KP6NWX";
  var headerParam = {
    headers: {
      "Content-Type": "application/json",
    },
  };

  const response = http.get(url, headerParam);

  check(response, {
    "Response status receiving a 200 response": (r) => r.status === 200,
  });

  let body = JSON.parse(response.body);
}
Output:
WARN[0000] Request Failed error="Get \"https://endpoint.example.to.cloud/api/reports/v1/SMOKETESTC6KP6NWX\": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0"
Changing the URL endpoint:
If I change the URL endpoint (to a mock-up URL) as below, there are no errors:
...
var url = "https://run.mocky.io/v3/16fa8113-57e0-4e47-99b9-b5c55da93d71";
...
Updated solution to run this locally:
In order to run this locally I had to add the certificate and key:
Example:
export let options = {
  ...
  tlsAuth: [
    {
      cert: open(`${__ENV.Certificate}`),
      key: open(`${__ENV.Key}`),
    },
  ],
};
In addition, pass --insecure-skip-tls-verify on the run command.
Example:
k6 run -e Certificate=/home/cert/example_certification.crt -e Key=/home/cert/certification/example_key.key example.js --insecure-skip-tls-verify
k6 is written in Go, and Go 1.15 introduced a breaking change in how X.509 certificates are handled: https://golang.org/doc/go1.15#commonname
As the error message says, you can temporarily allow the old behavior by setting the GODEBUG=x509ignoreCN=0 environment variable, but that will likely stop working in a few months with Go 1.17. Using the insecureSkipTLSVerify k6 option might also work (I haven't checked), but as the name implies, it disables all TLS verification and is insecure.
So the real solution is to re-generate your server-side certificate properly, with the hostname in the subject alternative name (SAN) field.
I am trying to pass firebase environment variables for deployment with now.
I have encoded these variables manually with base64 and added them to now with the following command:
now secrets add firebase_api_key_dev "mybase64string"
The encoded string was placed within speech marks ""
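For reference, the manual encoding step can be reproduced with a short Python sketch; the raw value below is a placeholder:

import base64

# Illustrative only: base64-encode the raw Firebase value before handing it
# to `now secrets add`; "my-raw-api-key" is a placeholder.
raw_value = "my-raw-api-key"
encoded = base64.b64encode(raw_value.encode("utf-8")).decode("ascii")
print(encoded)  # paste into: now secrets add firebase_api_key_dev "<encoded-value>"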
These show up in the CLI, and I can see them all using the list command:
now secrets ls
> 7 secrets found under project-name [499ms]
name                        created
firebase_api_key_dev        6d ago
firebase_auth_domain_dev    6d ago
...
In my firebase config, I am using the following code:
const config = {
  apiKey: Buffer.from(process.env.FIREBASE_API_KEY, "base64").toString(),
  authDomain: Buffer.from(process.env.FIREBASE_AUTH_DOMAIN, "base64").toString(),
  ...
}
In my now.json file I have the following code:
{
  "env": {
    "FIREBASE_API_KEY": "@firebase_api_key_dev",
    "FIREBASE_AUTH_DOMAIN": "@firebase_auth_domain_dev",
    ...
  }
}
Everything works fine in my local environment (when I run next), as I also have a .env file with these variables, yet when I deploy my code I get the following error in my Now console:
TypeError [ERR_INVALID_ARG_TYPE]: The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type undefined
Does this indicate that my environment variables are not being read? What's the issue here? It looks like they don't exist at all
The solution was to replace my existing now.json with:
{
  "build": {
    "env": {
      "FIREBASE_API_KEY": "@firebase_api_key",
      "FIREBASE_AUTH_DOMAIN": "@firebase_auth_domain",
      "FIREBASE_DATABASE_URL": "@firebase_database_url",
      "FIREBASE_PROJECT_ID": "@firebase_project_id",
      "FIREBASE_STORAGE_BUCKET": "@firebase_storage_bucket",
      "FIREBASE_MESSAGING_SENDER_ID": "@firebase_messaging_sender_id",
      "FIREBASE_APP_ID": "@firebase_app_id",
      "FIREBASE_API_KEY_DEV": "@firebase_api_key_dev",
      "FIREBASE_AUTH_DOMAIN_DEV": "@firebase_auth_domain_dev",
      "FIREBASE_DATABASE_URL_DEV": "@firebase_database_url_dev",
      "FIREBASE_PROJECT_ID_DEV": "@firebase_project_id_dev",
      "FIREBASE_STORAGE_BUCKET_DEV": "@firebase_storage_bucket_dev",
      "FIREBASE_MESSAGING_SENDER_ID_DEV": "@firebase_messaging_sender_id_dev",
      "FIREBASE_APP_ID_DEV": "@firebase_app_id_dev"
    }
  }
}
I was missing the build key wrapping env.
I had to contact ZEIT support to help me identify this issue.