How does one specify multiple schemes with different ports? Specifically, I want to have HTTP on port 81 and HTTPS on port 444.
swagger: '2.0'
info:
  version: 1.0.0
  title: API for gateways
  description: API for gateways to access server (port 81 for http and 444 for https)
schemes:
  - http
  - https
host: gateway.example.com:81
basePath: /1.0
paths:
This is possible in OpenAPI 3.0, but not in OpenAPI/Swagger 2.0.
openapi: 3.0.0
servers:
  - url: 'http://gateway.example.com:81'
  - url: 'https://gateway.example.com:444'
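If you prefer a single servers entry, OpenAPI 3.0 also supports server variables; a minimal sketch (the variable names are illustrative, and the basePath from the 2.0 definition moves into the URL) could look like:

openapi: 3.0.0
servers:
  - url: '{scheme}://gateway.example.com:{port}/1.0'
    variables:
      scheme:
        enum: [http, https]
        default: http
      port:
        # Port values are strings in OpenAPI server variables
        enum: ['81', '444']
        default: '81'

Note this form doesn't enforce the scheme/port pairing (it would also allow http on 444), so the two explicit URLs above are usually the safer choice.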
I feel like this should be simple: pointing an Istio VirtualService at a Knative service, which gets stood up as an ExternalName-type Service.
- match:
  - authority:
      prefix: demo.example.io
    gateways:
    - knative-serving/knative-ingress-gateway
    - cert-manager/secure-ingress-gateway
    uri:
      prefix: /api
  retries:
    attempts: 5
    perTryTimeout: 15s
    retryOn: gateway-error,connect-failure,refused-stream
  route:
  - destination:
      host: proxy.default.svc.local
      port:
        number: 80
This throws a 404. If I use a regular service it works fine. Or if I point to the first revision proxy-0001.default.svc... it works fine.
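For context (not part of the original post): Knative's ingress routes on the Host header, so if the forwarded request still carries demo.example.io when it reaches the Knative-created ExternalName service, Knative has no matching route and answers 404, whereas a plain Service or a concrete revision hostname matches directly. A minimal sketch of the commonly suggested workaround, rewriting the authority to the Knative service's cluster-local name (assumed here to be proxy.default.svc.cluster.local), would be:

- match:
  - authority:
      prefix: demo.example.io
    uri:
      prefix: /api
  rewrite:
    # Assumption: Knative matches routes on the Host/authority header,
    # so rewrite it to the service's cluster-local hostname
    authority: proxy.default.svc.cluster.local
  route:
  - destination:
      host: proxy.default.svc.cluster.local
      port:
        number: 80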
I am using Spring Config Server with 2 backends: Git and Vault (for secrets), and I have client apps that connect to the config server to get their remote configuration (Git and Vault).
I have this configuration:
config server
server:
  port: 8888
spring:
  profiles:
    active: git, vault
  cloud:
    config:
      server:
        vault:
          host: hostName
          kvVersion: 1
          order: 1
          backend: secret/cad
          scheme: https
          port: 443
        git:
          order: 2
          uri: git#gitlab.git_repo
          ignoreLocalSshSettings: true
          force-pull: true
          deleteUntrackedBranches: true
          privateKey: key
and client side
spring:
  application:
    name: my_app_name
  cloud:
    vault:
      config:
        uri: http://localhost:8888
        token: s.token
        fail-fast: true
This way I have to change the token for every client every day (the token expires after 24h). Is there a way to renew the token with this configuration, or is there another way to authenticate to Vault?
spring.cloud.vault:
  config.lifecycle:
    enabled: true
    min-renewal: 10s
    expiry-threshold: 1440m
    lease-endpoints: Legacy
1440 minutes = 24h
Reference: https://cloud.spring.io/spring-cloud-vault/reference/html/#vault-lease-renewal
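If renewing a daily token is still too brittle, another option (not covered in the answer above) is to stop distributing tokens and let each client authenticate with AppRole instead; a minimal sketch, assuming the clients use Spring Cloud Vault directly as the lifecycle settings above do, and with placeholder credentials:

spring:
  cloud:
    vault:
      authentication: APPROLE
      app-role:
        # Placeholder values issued by your Vault operator
        role-id: <role-id>
        secret-id: <secret-id>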
We are using Kubernetes with Istio and have configured a virtual service:
http:
- match:
  - uri:
      prefix: /api
  rewrite:
    uri: /api
  route:
  - destination:
      host: svc-api
      port:
        number: 80
      subset: stable
    weight: 0
  - destination:
      host: svc-api
      port:
        number: 80
      subset: vnext
    weight: 100
  corsPolicy: &apiCorsPolicy
    allowOrigins:
    - exact: '*'
    allowMethods:
    - GET
    - POST
    - OPTIONS
    allowCredentials: true
    allowHeaders:
    - "*"
- match:
  - uri:
      prefix: /
  rewrite:
    uri: /
  route:
  - destination:
      host: svc-api
      port:
        number: 80
      subset: stable
    weight: 0
  - destination:
      host: svc-api
      port:
        number: 80
      subset: vnext
    weight: 100
  corsPolicy: *apiCorsPolicy
Making a request to https://[host]/api/* from a browser fails on OPTIONS request with 'from origin [request] has been blocked by CORS policy'.
Describing the service in k8s shows the correct configuration.
According to the 1.6.4 docs, the structure using allowOrigins with exact (instead of the older allowOrigin) is correct.
What am I missing here?
A few things worth mentioning here.
In older Istio versions there was a problem with CORS and JWT combined; take a look at the links below:
Istio 1.5 cors not working - Response to preflight request doesn't pass access control check
https://github.com/istio/istio/issues/16171
There is another GitHub issue about that; in this comment a community member has the same problem, so maybe it's worth opening a new GitHub issue for it.
Additionally, there is an answer from Istio dev @howardjohn:
Hi everyone. Testing CORS using curl can be a bit misleading. CORS is not enforced at the server side; it will not return a 4xx error for example. Instead, headers are returned back which are used by browsers to deny/accept. https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/cors.html?highlight=cors gives a good demo of this, and https://www.sohamkamani.com/blog/2016/12/21/web-security-cors/#:~:text=CORS%20isn't%20actually%20enforced,header%20in%20all%20its%20responses. is a good explanation.
So Istio's job here is simply to return these headers. I have added a test showing this works: #26231.
As I mentioned in a comment, it's worth taking a look at another community member's configuration here, as he has a working OPTIONS example but hits a 403 issue with POST.
A few things I would change in your virtual service:
Use just corsPolicy instead of corsPolicy: &apiCorsPolicy
corsPolicy:
Instead of exact you can also use regex, which may do the trick for wildcards (note that the match-anything regex is .*, not a bare *).
allowOrigins:
- regex: '.*'
About the / and /api paths, I think prefix is enough here; you don't need the rewrite.
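Putting those suggestions together, a minimal corsPolicy sketch (mirroring the fields from your virtual service, with regex for the wildcard origin) might look like this:

corsPolicy:
  allowOrigins:
  - regex: '.*'
  allowMethods:
  - GET
  - POST
  - OPTIONS
  allowCredentials: true
  allowHeaders:
  - "*"

Keep in mind that, as the quoted comment says, Envoy only returns the CORS headers; whether the request is blocked is decided by the browser, so it's best verified from the browser rather than with curl.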
I've followed the Google Cloud Endpoints with Google Kubernetes Engine tutorial here: https://cloud.google.com/endpoints/docs/openapi/get-started-kubernetes-engine
using my own Docker image. The Kubernetes part works fine and I can access my services via the load balancer IP.
However, when I try to put my service behind cloud endpoints in order to secure it, the endpoint remains public and can be accessed without an API key. Here is my openapi.yaml, deployed with gcloud endpoints services deploy openapi.yaml:
swagger: "2.0"
info:
description: "A test."
title: "API test"
version: "1.0.0"
host: "<test-api.endpoints.MY-PROJECT-ID.cloud.goog>"
x-google-endpoints:
- name: "<test-api.endpoints.MY-PROJECT-ID.cloud.goog>"
target: "<MY LOADBALANCER IP>"
#require an API key to access project
security:
- api_key: []
paths:
/:
get:
summary: Return django REST default page
description: test
operationId: test
responses:
"200":
description: OK
securityDefinitions:
# This section configures basic authentication with an API key.
api_key:
type: "apiKey"
name: "key"
in: "query"
When I try to access my service via the value for host (obscured because it is still open), it is still open and does not require an API key. Nothing is shown in the Cloud Endpoints logs either. As far as I understand, the configuration in openapi.yaml alone should be enough to restrict access?
Following this page (https://swagger.io/docs/specification/2-0/authentication/api-keys/), your security section is misplaced. It should look like this:
swagger: "2.0"
info:
description: "A test."
title: "API test"
version: "1.0.0"
host: "<test-api.endpoints.MY-PROJECT-ID.cloud.goog>"
x-google-endpoints:
- name: "<test-api.endpoints.MY-PROJECT-ID.cloud.goog>"
target: "<MY LOADBALANCER IP>"
#require an API key to access project
paths:
/:
get:
security:
- api_key: []
summary: Return django REST default page
description: test
operationId: test
responses:
"200":
description: OK
securityDefinitions:
# This section configures basic authentication with an API key.
api_key:
type: "apiKey"
name: "key"
in: "query"
Edit:
In case you want the API key in the header, you have to change the definition, but you can leave security where you declared it:
swagger: "2.0"
info:
description: "A test."
title: "API test"
version: "1.0.0"
host: "<test-api.endpoints.MY-PROJECT-ID.cloud.goog>"
x-google-endpoints:
- name: "<test-api.endpoints.MY-PROJECT-ID.cloud.goog>"
target: "<MY LOADBALANCER IP>"
#require an API key to access project
security:
- api_key: []
paths:
/:
get:
summary: Return django REST default page
description: test
operationId: test
responses:
"200":
description: OK
securityDefinitions:
# This section configures basic authentication with an API key.
api_key:
type: "apiKey"
name: "key"
in: "header"
I wasn't sure which variant you preferred, so I've adjusted both.
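As a side note (not from the original answer): with the global security placement, OpenAPI 2.0 also lets you exempt a single operation by overriding it with an empty security array, which can be handy for something like a health check. A small sketch with a hypothetical /healthz path:

paths:
  /healthz:
    get:
      # Empty list overrides the global API-key requirement for this operation only
      security: []
      responses:
        "200":
          description: OK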
I am trying to use serverless-offline to develop/simulate my API Gateway locally. My API Gateway makes liberal use of HTTP proxy integrations. The production Resource looks like this:
I have created a serverless-offline configuration based on a few documents and discussions which say that it is possible to define an HTTP proxy integration using CloudFormation configuration:
httpProxyWithApiGateway.md - Setting an HTTP Proxy on API Gateway by using Serverless framework.
Setting an HTTP Proxy on API Gateway (official Serverless docs: API Gateway)
I have adapted the above two configuration examples for my purposes; see below.
Any tips on what I might be doing wrong here?
plugins:
  - serverless-offline
service: company-apig
provider:
  name: aws
  stage: dev
  runtime: python2.7
resources:
  Resources:
    # Parent APIG RestApi
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: company-apig
        Description: 'The main entry point of the APIG'
    # Resource /endpoint
    EndpointResource:
      Type: AWS::ApiGateway::Resource
      Properties:
        ParentId:
          Fn::GetAtt:
            - ApiGatewayRestApi
            - RootResourceId
        PathPart: 'endpoint'
        RestApiId:
          Ref: ApiGatewayRestApi
    # Resource /endpoint/{proxy+}
    EndpointProxyPath:
      Type: AWS::ApiGateway::Resource
      Properties:
        ParentId:
          Ref: EndpointResource
        PathPart: '{proxy+}'
        RestApiId:
          Ref: ApiGatewayRestApi
    # Method ANY /endpoint/{proxy+}
    EndpointProxyAnyMethod:
      Type: AWS::ApiGateway::Method
      Properties:
        AuthorizationType: NONE
        HttpMethod: ANY
        Integration:
          IntegrationHttpMethod: ANY
          Type: HTTP_PROXY
          Uri: http://endpoint.company.cool/{proxy}
          PassthroughBehavior: WHEN_NO_MATCH
        MethodResponses:
          - StatusCode: 200
        ResourceId:
          Ref: EndpointProxyPath
        RestApiId:
          Ref: ApiGatewayRestApi
For the above configuration, I get this output. Apparently, the configuration registers no routes at all.
{
  "statusCode": 404,
  "error": "Serverless-offline: route not found.",
  "currentRoute": "get - /endpoint/ping",
  "existingRoutes": []
}
Related: I am also attempting to solve the same problem using aws-sam, at the following post - API Gateway HTTP Proxy integration with aws-sam (NOT Lambda Proxy)
By default, serverless-offline doesn't parse your resources for endpoints; enable it via custom config.
custom:
  serverless-offline:
    resourceRoutes: true
Ends up serving:
Serverless: Routes defined in resources:
Serverless: ANY /endpoint/{proxy*} -> http://endpoint.company.cool/{proxy}
Serverless: Offline listening on http://localhost:3000
Documentation
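For completeness, a minimal sketch of where that block sits in the serverless.yml from the question (everything else unchanged):

plugins:
  - serverless-offline

custom:
  serverless-offline:
    resourceRoutes: true

service: company-apig
# provider and resources sections stay exactly as in the question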