Serverless: create api key from SecretsManager value - aws-api-gateway

I have a Serverless stack deploying an API to AWS. I want to protect it with an API key stored in Secrets Manager. The idea is to keep the value of the key in Secrets Manager, pull it via SSM on deploy, and use it as my API key.
serverless.yml
service: my-app
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  ...
  apiKeys:
    - name: apikey
      value: ${ssm:myapp-api-key}
As far as I can tell, the deployed API Gateway key should match the SSM secret, yet when I look in the console, the two values are different. There are no error messages either. What am I overlooking?

I ran into the same problem a while ago and resorted to the serverless-add-api-key plugin, because I couldn't tell when Serverless would create a new API key for API Gateway versus reuse an existing one.
With this plugin your serverless.yml would look something like this:
service: my-app
frameworkVersion: '2'

plugins:
  - serverless-add-api-key

custom:
  apiKeys:
    - name: apikey
      value: ${ssm:myapp-api-key}

functions:
  your-function:
    runtime: ...
    handler: ...
    name: ...
    events:
      - http:
          ...
          private: true
You can also use a stage-specific configuration:
custom:
  apiKeys:
    dev:
      - name: apikey
        value: ${ssm:myapp-api-key}

This worked well for me:
custom:
  apiKeys:
    - name: apikey
      value: ${ssm:/aws/reference/secretsmanager/dev/user-api/api-key}
      deleteAtRemoval: false # retain the key after stack removal

functions:
  getUserById:
    handler: src/handlers/user/by-id.handler
    events:
      - http:
          path: user/{id}
          method: get
          cors: true
          private: true

Related

Datasource config for Azure Monitor datasource in kube-prometheus-stack

I'm trying to figure out how to configure an Azure Monitor datasource for Grafana.
What works so far: the datasource is listed in Grafana when I deploy the stack via Helm.
This is the respective config from my values.yml:
grafana:
  additionalDataSources:
    - name: Azure Monitor
      type: grafana-azure-monitor-datasource
      version: 1
      id: 2
      orgId: 1
      typeLogoUrl: public/app/plugins/datasource/grafana-azure-monitor-datasource/img/logo.jpg
      url: /api/datasources/proxy/2
      access: proxy
      isDefault: false
      readOnly: false
      editable: true
      jsonData:
        timeInterval: 30s
        azureLogAnalyticsSameAs: true
        cloudName: azuremonitor
        clientId: $GF_AZURE_CLIENT_ID
        tenantId: $GF_AZURE_TENANT_ID
        subscriptionId: $GF_AZURE_SUBSCRIPTION_ID
Now, every time Grafana restarts, I need to set the client secret again.
Is there any way to configure it directly for the startup of Grafana, along with the default subscription to use?
I finally found the missing key:
grafana:
  additionalDataSources:
    - name: Azure Monitor
      ...
      jsonData:
        ...
      secureJsonData: # the missing piece
        clientSecret: $GF_AZURE_CLIENT_SECRET
The client secret has to be passed via secureJsonData.
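To avoid re-entering the secret after each restart, the environment variable itself can also be provisioned at startup. A sketch using the Grafana chart's envFromSecret option; the Secret name grafana-azure-creds is an assumption, and the Secret is expected to hold the GF_AZURE_* keys:

```yaml
grafana:
  # "grafana-azure-creds" is a hypothetical Kubernetes Secret containing
  # GF_AZURE_CLIENT_ID, GF_AZURE_TENANT_ID, GF_AZURE_SUBSCRIPTION_ID,
  # and GF_AZURE_CLIENT_SECRET; its keys are injected as env vars at startup.
  envFromSecret: grafana-azure-creds
  additionalDataSources:
    - name: Azure Monitor
      ...
      secureJsonData:
        clientSecret: $GF_AZURE_CLIENT_SECRET
```

This keeps the secret out of values.yml entirely; only the Secret reference is committed.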

Problem when deploying a SageMaker Multi-Model Endpoints with AWS CDK/CloudFormation

I am trying to automate the deployment of a SageMaker multi-model endpoint with AWS CDK in Python (I assume it would be the same when writing a CloudFormation template directly in JSON/YAML), but when deploying, an error occurs at the creation of the SageMaker model.
Here is part of the CloudFormation template generated by the cdk synth command:
Resources:
  smmodelexecutionrole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: sagemaker.amazonaws.com
        Version: "2012-10-17"
      Policies:
        - PolicyDocument:
            Statement:
              - Action: s3:GetObject
                Effect: Allow
                Resource:
                  Fn::Join:
                    - ""
                    - - "arn:"
                      - Ref: AWS::Partition
                      - :s3:::<bucket_name>/deploy_multi_model_artifact/*
            Version: "2012-10-17"
          PolicyName: policy_s3
        - PolicyDocument:
            Statement:
              - Action: ecr:*
                Effect: Allow
                Resource:
                  Fn::Join:
                    - ""
                    - - "arn:"
                      - Ref: AWS::Partition
                      - ":ecr:"
                      - Ref: AWS::Region
                      - ":"
                      - Ref: AWS::AccountId
                      - :repository/<my_ecr_repository>
            Version: "2012-10-17"
          PolicyName: policy_ecr
    Metadata:
      aws:cdk:path: <omitted>
  smmodel:
    Type: AWS::SageMaker::Model
    Properties:
      ExecutionRoleArn:
        Fn::GetAtt:
          - smmodelexecutionrole
          - Arn
      Containers:
        - Image: xxxxxxxxxxxx.dkr.ecr.<my_aws_region>.amazonaws.com/<my_ecr_repository>/multi-model:latest
          Mode: MultiModel
          ModelDataUrl: s3://<bucket_name>/deploy_multi_model_artifact/
      ModelName: MyModel
    Metadata:
      aws:cdk:path: <omitted>
When running cdk deploy in the terminal, the following error occurs:
3/6 | 7:56:58 PM | CREATE_FAILED | AWS::SageMaker::Model | sm_model (smmodel)
Could not access model data at s3://<bucket_name>/deploy_multi_model_artifact/.
Please ensure that the role "arn:aws:iam::xxxxxxxxxxxx:role/<my_role>" exists
and that its trust relationship policy allows the action "sts:AssumeRole" for the service principal "sagemaker.amazonaws.com".
Also ensure that the role has "s3:GetObject" permissions and that the object is located in <my_aws_region>.
(Service: AmazonSageMaker; Status Code: 400; Error Code: ValidationException; Request ID: xxxxx)
What I have:
- an ECR repository containing the Docker image
- an S3 bucket containing the model artifacts (.tar.gz files) inside the "folder" deploy_multi_model_artifact
To test whether it was an IAM role issue, I replaced MultiModel with SingleModel and s3://<bucket_name>/deploy_multi_model_artifact/ with s3://<bucket_name>/deploy_multi_model_artifact/one_of_my_artifacts.tar.gz, and the model was created successfully. So I'm guessing the problem is not IAM-related, contrary to what the error message says (though I may be mistaken!).
So I am wondering where the problem comes from. This is even more confusing because I have already deployed this multi-model endpoint using boto3 without any problem.
Any help would be greatly appreciated!
(About Multi-Model Endpoints deployment: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/multi_model_xgboost_home_value/xgboost_multi_model_endpoint_home_value.ipynb)
The problem was that I forgot to grant SageMaker permissions to the IAM role.
I was able to deploy the multi-model endpoint after adding the AmazonSageMakerFullAccess managed policy to the role.
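In CloudFormation terms, granting that access can be done by attaching the managed policy to the execution role via ManagedPolicyArns. A sketch (a narrower custom policy would be preferable in production):

```yaml
smmodelexecutionrole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            Service: sagemaker.amazonaws.com
      Version: "2012-10-17"
    # Grants the SageMaker permissions the multi-model endpoint needs;
    # keep the inline policies for S3/ECR from the original template.
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
```

In CDK Python this corresponds to passing the managed policy when constructing the role, or attaching it afterwards.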

How to wait until env for appid is created in jelastic manifest installation?

I have the following manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: shopozor-k8s-cluster
  name: Shopozor k8s cluster
  version: 0.0
  baseUrl: https://raw.githubusercontent.com/shopozor/services/dev
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        default: shopozor
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: version
        type: string
        caption: Version
        default: v1.16.3
  onInstall:
    - installKubernetes
    - enableSubDomains
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cmd
          cmd: |-
            curl -fsSL ${baseUrl}/scripts/install_k8s.sh | /bin/bash
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.version}
          jaeger: false
    enableSubDomains:
      - jelastic.env.binder.AddDomains[cp]:
          domains: staging,api-staging,assets-staging,api,assets
Unfortunately, when I run that manifest, the k8s cluster gets installed, but the subdomains cannot be created (yet), because:
[15:26:28 Shopozor.cluster:3]: enableSubDomains: {"action":"enableSubDomains","params":{}}
[15:26:29 Shopozor.cluster:4]: api [cp]: {"method":"jelastic.env.binder.AddDomains","params":{"domains":"staging,api-staging,assets-staging,api,assets"},"nodeGroup":"cp"}
[15:26:29 Shopozor.cluster:4]: ERROR: api.response: {"result":2303,"source":"JEL","error":"env for appid [5ce25f5a6988fbbaf34999b08dd1d47c] not created."}
What Jelastic API methods can I use to wait until subdomain creation is possible?
My current workaround is to split that manifest into two manifests: one cluster installation manifest and one update manifest creating the subdomains. However, I'd like to have everything in the same manifest.
Please change this:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      domains: staging,api-staging,assets-staging,api,assets
to:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      envName: ${settings.envName}
      domains: staging,api-staging,assets-staging,api,assets

API Gateway HTTP Proxy integration with serverless-offline (NOT Lambda Proxy)

I am trying to use serverless-offline to develop / simulate my API Gateway locally. My API gateway makes liberal use of the HTTP proxy integrations. The production Resource looks like this:
I have created a serverless-offline configuration based on a few documents and discussion which say that it is possible to define an HTTP Proxy integration using Cloud Formation configuration:
httpProxyWithApiGateway.md - Setting an HTTP Proxy on API Gateway by using Serverless framework.
Setting an HTTP Proxy on API Gateway (official Serverless docs: API Gateway)
I have adapted the above two configuration examples for my purposes, see below.
Any tips on what I might be doing wrong here?
plugins:
  - serverless-offline

service: company-apig

provider:
  name: aws
  stage: dev
  runtime: python2.7

resources:
  Resources:
    # Parent APIG RestApi
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: company-apig
        Description: 'The main entry point of the APIG'
    # Resource /endpoint
    EndpointResource:
      Type: AWS::ApiGateway::Resource
      Properties:
        ParentId:
          Fn::GetAtt:
            - ApiGatewayRestApi
            - RootResourceId
        PathPart: 'endpoint'
        RestApiId:
          Ref: ApiGatewayRestApi
    # Resource /endpoint/{proxy+}
    EndpointProxyPath:
      Type: AWS::ApiGateway::Resource
      Properties:
        ParentId:
          Ref: EndpointResource
        PathPart: '{proxy+}'
        RestApiId:
          Ref: ApiGatewayRestApi
    # Method ANY /endpoint/{proxy+}
    EndpointProxyAnyMethod:
      Type: AWS::ApiGateway::Method
      Properties:
        AuthorizationType: NONE
        HttpMethod: ANY
        Integration:
          IntegrationHttpMethod: ANY
          Type: HTTP_PROXY
          Uri: http://endpoint.company.cool/{proxy}
          PassthroughBehavior: WHEN_NO_MATCH
        MethodResponses:
          - StatusCode: 200
        ResourceId:
          Ref: EndpointProxyPath
        RestApiId:
          Ref: ApiGatewayRestApi
For the above configuration, I get this output. Apparently, the configuration registers no routes at all.
{
  "statusCode": 404,
  "error": "Serverless-offline: route not found.",
  "currentRoute": "get - /endpoint/ping",
  "existingRoutes": []
}
Related: I am also attempting to solve the same problem using aws-sam, at the following post - API Gateway HTTP Proxy integration with aws-sam (NOT Lambda Proxy)
By default, serverless-offline doesn't parse your resources for endpoints; enable that via custom config.
custom:
  serverless-offline:
    resourceRoutes: true
Ends up serving:
Serverless: Routes defined in resources:
Serverless: ANY /endpoint/{proxy*} -> http://endpoint.company.cool/{proxy}
Serverless: Offline listening on http://localhost:3000

Cannot access restApiId & restApiRootResourceId for cross stack reference in serverless yml

Since I hit the 200-resource CloudFormation limit, I found a way to split my setup into different services using a cross-stack reference, and I managed to get that working. The issue is that I cannot pass the restApiId & restApiRootResourceId dynamically; right now, I am setting the IDs statically in service-2.
Basically the service-1 looks like,
provider:
  name: aws
  runtime: nodejs8.10
  apiGateway:
    restApiId:
      Ref: ApiGatewayRestApi
    restApiResources:
      Fn::GetAtt:
        - ApiGatewayRestApi
        - RootResourceId

custom:
  stage: "${opt:stage, self:provider.stage}"

resources:
  Resources:
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: ${self:service}-${self:custom.stage}-1
  Outputs:
    ApiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: ApiGatewayRestApi-restApiId
    ApiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGatewayRestApi
          - RootResourceId
      Export:
        Name: ApiGatewayRestApi-rootResourceId
And the service-2 looks like this,
provider:
  name: aws
  runtime: nodejs8.10
  apiGateway-shared:
    restApiId:
      'Fn::ImportValue': ApiGatewayRestApi-restApiId
    restApiRootResourceId:
      'Fn::ImportValue': ApiGatewayRestApi-rootResourceId
With the above service-2 config, I cannot reference the IDs.
FYI: both services are in different files.
So what's wrong with this approach?
Serverless has special syntax for accessing stack output variables: ${cf:stackName.outputKey}.
Note that Fn::ImportValue only works inside the resources section, not in the provider section.
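Applied to service-2, the cross-stack reference could look like the sketch below. The stack name my-service-1-dev is hypothetical; substitute the actual CloudFormation stack name of service-1 (by default <service>-<stage>), and note the output keys are the logical names from service-1's Outputs section, not the export names:

```yaml
provider:
  name: aws
  runtime: nodejs8.10
  apiGateway:
    # "my-service-1-dev" is a placeholder for service-1's real stack name
    restApiId: ${cf:my-service-1-dev.ApiGatewayRestApiId}
    restApiRootResourceId: ${cf:my-service-1-dev.ApiGatewayRestApiRootResourceId}
```

With this, service-2's functions attach their endpoints to the REST API created by service-1.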