Swagger OpenAPI security schema giving object error - openapi

I am having trouble setting up a security scheme in my OpenAPI (Swagger) spec in YAML.
I get the error below when defining the security schemes.
In paths I used BearerAuth, but I still get the same issue:
paths:
  /v1/items:
    get:
      tags:
        - Item Resources
      summary: searches items
      security:
        - BearerAuth: [adsfdf]
      operationId: searchItems
      description: |
Any suggestions on how to fix this, or is there an issue with my implementation?

Your global security definition is indented. Global security is defined at the top level of the document, not inside the auth type or the components definitions.
Also, in your path usage you've defined a scope, adsfdf. Security scopes do not work with Bearer authentication; that bracket syntax exists in OpenAPI for OAuth scopes. For more details, see Swagger's documentation.
security:
  - bearerAuth: []   # use the same name as above
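For reference, here is a minimal end-to-end sketch of how the pieces fit together. It assumes a plain JWT bearer scheme; the scheme name BearerAuth and the /v1/items path come from the question, everything else is illustrative:

components:
  securitySchemes:
    BearerAuth:              # arbitrary name, reused wherever security is applied
      type: http
      scheme: bearer
      bearerFormat: JWT      # optional, documentation only
security:                    # global default, declared at the top level of the document
  - BearerAuth: []           # empty list: bearer auth takes no scopes
paths:
  /v1/items:
    get:
      summary: searches items
      operationId: searchItems
      security:
        - BearerAuth: []     # per-operation security, again with no scopes
      responses:
        '200':
          description: OK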

Related

Grafana: How to use JWT authentication?

I want to use JWT for Grafana login authentication. The Grafana docs list some steps for this, but no [auth.jwt] default is provided in sample.ini. Can you also clarify what is meant by the header name that contains a token in the step for enabling JWT?
This is the header that carries the JWT payload from the proxy in front of Grafana - in the case of IAP (https://cloud.google.com/iap/docs/signed-headers-howto), for example, it is x-goog-iap-jwt-assertion. The contents of this header are validated against the key source specified in jwk_set_url, jwk_set_file, or key_file, after which claims such as the username and email can be read. Examples for IAP are also available at the URL above:
[auth.jwt]
enabled = true
header_name = x-goog-iap-jwt-assertion
username_claim = sub
email_claim = email
jwk_set_url = https://www.gstatic.com/iap/verify/public_key-jwk
Note, however, that auth.jwt is currently "broken by design", as mentioned in:
Authentication Grafana via JWT

/$metadata is not supported on Azure API Management (AAM) for onboarded OData V4 APIs

I have onboarded my OData V4 APIs onto Azure API Management (gateway) through an OpenAPI 3.0 spec.
I have defined a set of OData endpoints in the spec and I can access them easily (with or without OData functionality like $top, $skip, $filter, etc.).
However, when I try to get the /$metadata result, I get a "500 Internal Server Error".
I've even tried adding "/$metadata" as one of the endpoints under 'paths' of the spec (same result).
paths:
  /$metadata:
    get:
      summary: getMetadata
      description: getMetadata
      operationId: getMetadata
      responses:
        '200':
          description: Metadata
I can add "/*" as a path, which gives me a list of entities when I just hit 'https://AAM_Url'.
But I don't want to do that, as it would accept any junk request like /fgfdgdg and make a call to the backend service...
My bad. It worked after adding "/$metadata" to the spec as one of the paths.
There was a bug in my outbound policy that was causing the issue: I was trying to read the response into a JObject, but /$metadata returns XML, so the conversion error surfaced as a 500 Internal Server Error response.
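For completeness, the /$metadata operation can also declare its XML payload in the OpenAPI spec, which makes it explicit that the response should not be parsed as JSON. A minimal sketch, with the content details being illustrative:

paths:
  /$metadata:
    get:
      summary: getMetadata
      operationId: getMetadata
      responses:
        '200':
          description: OData CSDL metadata document
          content:
            application/xml:    # $metadata is XML (CSDL), not JSON
              schema:
                type: string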

How to add an HTTP method to a Resource in Keycloak

I want Keycloak to protect my RESTful URLs, e.g. POST /user/1, DELETE /user/1.
When I create a new resource in Keycloak, I find there are ONLY uris, and no HTTP method field.
So how can I distinguish between DELETE and POST?
Keycloak Gatekeeper has a concept of resources, where you can also define authorization at the request-method level, e.g.:
resources:
  - uri: /*
  - uri: /users/*
    methods:
      - GET
    roles:
      - viewer
  - uri: /users/*
    methods:
      - POST
      - DELETE
    roles:
      - editor
There is another possible solution using Authorization Scopes. You can create GET, POST, and DELETE authorization scopes, associate them with your resource, and then create a scope-based permission. For example, when integrating with Spring Boot you just provide this setting in application.properties:
keycloak.policy-enforcer-config.http-method-as-scope=true
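A minimal sketch of the same setting in application.yml form, assuming the Keycloak Spring Boot adapter with the policy enforcer is already set up:

keycloak:
  policy-enforcer-config:
    # Treat the incoming HTTP method (GET, POST, DELETE, ...) as the scope
    # when requesting a permission decision from Keycloak.
    http-method-as-scope: true

The GET, POST, and DELETE authorization scopes still have to be created in Keycloak, attached to the resource, and covered by a scope-based permission, as described above.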

API: sqs:CreateQueue always ACCESS DENIED

I'm trying to create an SQS queue with CloudFormation, but I keep getting this error in the console:
API: sqs:CreateQueue Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.
Obviously I'm missing some sort of permission, but this guide didn't really specify how to resolve it.
Here's the code I made:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyQueue:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-test
      ReceiveMessageWaitTimeSeconds: 20
      RedrivePolicy:
        deadLetterTargetArn:
          Fn::GetAtt:
            - "MyDLQ"
            - "Arn"
        maxReceiveCount: 4
      Tags:
        - Key: "ProjectName"
          Value: "project-x"
  MyDLQ:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-dlq-test
I'm trying to understand this doc, but I'm not sure how to attach a policy that allows creating queues. Could someone please give me a full example?
tyron's comment on your question is spot on: check the permissions of the user executing the CloudFormation deployment. If you're running commands directly, this is usually pretty easy to check; in some cases you may be working with a more complicated, automated environment.
I find the best way to troubleshoot permissions in an automated world is via CloudTrail. After any API call has failed, whether from the CLI, CloudFormation, or another source, you can look up the call in CloudTrail.
In this case, searching for "Event Name" = "CreateQueue" in the time range of the failure will turn up a result with details like the following:
Source IP Address: this field may say something like cloudformation.amazonaws.com, or the IP of your machine/office. Helpful when you need to filter events based on the source.
User name: in my case, this was the EC2 instance ID of the agent running the CFN template.
Access Key ID: for EC2 instances, this is likely a set of temporary access credentials, but for a real user, it will show you what key was used.
Actual event data: especially helpful for non-permissions errors, the actual event may show you errors in the request itself.
In my case, the specific EC2 instance that ran automation was out of date and needed to be updated to use the correct IAM Role/Instance Profile. CloudTrail helped me track that down.
If you are using AWS CodePipeline (where you may be using AWS CodeBuild to run and deploy your CloudFormation stack), remember that your CodeBuild role (created under IAM Roles) must have the correct permissions.
You can identify which role is being used and attach the required policies:
Open the CodeBuild project
Go to Build Details > Environment > Service Role
Open the Service Role (hyperlinked)
Add SQS permissions to the role's policies (for example, something like the sketch below)
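As a starting point, here is a minimal sketch of an IAM policy document that would cover this template. The action list and the resource pattern (scoped to the sqs-* queue-name prefix from the template) are illustrative and may need adjusting for your account:

PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action:
        - sqs:CreateQueue
        - sqs:GetQueueAttributes
        - sqs:SetQueueAttributes
        - sqs:TagQueue        # needed because the template sets Tags
        - sqs:DeleteQueue     # needed for stack updates and rollbacks
      Resource: "arn:aws:sqs:us-east-1:*:sqs-*"

Attach a statement like this to whichever principal actually performs the deployment - your own user, the CodeBuild service role, or the EC2 instance profile identified through CloudTrail.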

How to get the auto-generated RestApi from my AWS SAM template, to use in another SAM template?

I used AWS SAM to generate my Lambda functions and APIs, but I want to be able to get this RestApi so I can use it in another SAM template.
The idea is to have one base infra CloudFormation/SAM template that creates the network, ALB, and API Gateway resources.
Then each "micro-service" would have its own SAM template and would create its API endpoints referencing this "root" RestApi by specifying the RestApiId attribute.
Is this a correct approach? I wonder whether, when I deploy each service, it will remove the APIs of the other services.
You can access the default auto-generated RestApi as ServerlessRestApi. This is the logical resource id of the auto-generated RestApi resource.
An example of accessing ServerlessRestApi in template.yaml is as follows:
Outputs:
  ApiRootURL:
    Description: API Root URL
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/${ServerlessRestApi.Stage}"
You can see ServerlessRestApi in the resource list of your CloudFormation stack. Note that ServerlessRestApi is not documented, so it might change in a future version.
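If other stacks need to reference the API, one option is to export its id from the base stack (alongside the ApiRootURL output above) and import it elsewhere with Fn::ImportValue. A minimal sketch; the export name SharedRestApiId is illustrative:

Outputs:
  SharedRestApiId:
    Description: Id of the auto-generated API, for other stacks to import
    Value: !Ref ServerlessRestApi
    Export:
      Name: SharedRestApiId

Another template can then read the id with !ImportValue SharedRestApiId and pass it wherever a RestApiId is expected, keeping in mind the caveat above that ServerlessRestApi is an undocumented logical id.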