Increase Spring Cloud Gateway RequestBodySize

I'm currently getting a 413 Request Entity Too Large when posting something routed through a Spring Cloud Gateway. It works when the request body isn't larger than around 3 MB.
Here is my application.yml (Scrubbed)
spring:
  profiles:
    active: prod
  main:
    allow-bean-definition-overriding: true
  application:
    name: my-awesome-gateway
  cloud:
    gateway:
      default-filters:
        - DedupeResponseHeader=Access-Control-Allow-Origin Access-Control-Allow-Credentials, RETAIN_UNIQUE
      routes:
        - id: my-service
          uri: https://myservicesdomainname
          predicates:
            - Path=/service/**
          filters:
            - StripPrefix=1
            - UserInfoFilter
            - name: Hystrix
              args:
                name: fallbackCommand
                fallbackUri: forward:/fallback/first
            - name: RequestSize
              args:
                maxSize: 500000000 #***** Here is my attempt to increase the size
      httpclient:
        connect-timeout: 10000
        response-timeout: 20000
This is the link I got RequestSize/args/maxSize from
https://cloud.spring.io/spring-cloud-static/spring-cloud-gateway/2.1.0.RELEASE/multi/multi__gatewayfilter_factories.html#_requestsize_gatewayfilter_factory
Edit:
The issue was with a Kubernetes Ingress Controller. I fixed the issue there, and it's now working.
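For reference: if the ingress controller involved is ingress-nginx (an assumption; the post doesn't name the controller), the body-size cap is raised with an annotation on the Ingress resource. All names below are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-awesome-gateway        # hypothetical Ingress name
  annotations:
    # ingress-nginx defaults to 1m; "0" disables the size check entirely
    nginx.ingress.kubernetes.io/proxy-body-size: "500m"
spec:
  rules:
    - http:
        paths:
          - path: /service
            pathType: Prefix
            backend:
              service:
                name: my-awesome-gateway   # hypothetical backend Service
                port:
                  number: 8080
```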

It only compares the Content-Length request header with the specified limit and rejects the request right away, i.e. it does not count the uploaded bytes.
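The check described above can be sketched as a small simulation (a hypothetical helper, not Spring's actual code): the filter looks only at the declared Content-Length and answers 413 before any body bytes are read.

```python
# Sketch of the RequestSize filter's behavior: compare the declared
# Content-Length header against maxSize and reject up front with 413.
MAX_SIZE = 500_000_000  # bytes, mirroring maxSize in the application.yml above

def check_request_size(headers: dict) -> int:
    """Return the HTTP status this request would get from the filter."""
    content_length = int(headers.get("Content-Length", 0))
    if content_length > MAX_SIZE:
        return 413  # Payload Too Large; the body is never consumed
    return 200

print(check_request_size({"Content-Length": "3000000"}))    # small upload passes
print(check_request_size({"Content-Length": "600000000"}))  # over the limit
```

Because only the header is inspected, a client that lies about Content-Length is rejected (or admitted) without any bytes being counted.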


Kibana - Elastic - Fleet - APM - failed to listen:listen tcp bind: can't assign requested address

Having set up Kibana and a Fleet Server, I have now attempted to add APM.
When going through the general setup, I always get the same error no matter what is done:
failed to listen:listen tcp *.*.*.*:8200: bind: can't assign requested address
This is when following the steps for setting up APM after having created the Fleet Server.
This is all being launched in Kubernetes, and the documentation has been gone through several times to no avail.
We did discover that we can hit the
/intake/v2/events
etc. endpoints when shelled into the container, but get a 404 for everything else. It's close but no cigar so far following the instructions.
As it turned out, the general walkthrough is soon to be deprecated in its current form.
Setup is far simpler in a Helm values file, where it's actually possible to configure Kibana with a package reference for your named APM service.
xpack.fleet.packages:
  - name: system
    version: latest
  - name: elastic_agent
    version: latest
  - name: fleet_server
    version: latest
  - name: apm
    version: latest
xpack.fleet.agentPolicies:
  - name: Fleet Server on ECK policy
    id: eck-fleet-server
    is_default_fleet_server: true
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    unenroll_timeout: 900
    package_policies:
      - name: fleet_server-1
        id: fleet_server-1
        package:
          name: fleet_server
  - name: Elastic Agent on ECK policy
    id: eck-agent
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    unenroll_timeout: 900
    is_default: true
    package_policies:
      - name: system-1
        id: system-1
        package:
          name: system
      - package:
          name: apm
        name: apm-1
        inputs:
          - type: apm
            enabled: true
            vars:
              - name: host
                value: 0.0.0.0:8200
Making sure these are set in the Kibana Helm values will allow any spun-up Fleet Server to automatically register as having APM.
The missing key in seemingly all the documentation is the need for an APM Service.
The simplest example of one is here:
Example yaml scripts
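A minimal sketch of such a Service, assuming the Fleet Server / Agent pods carry a label like app: fleet-server (the name, namespace, and selector here are hypothetical; match them to your own deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: apm                  # hypothetical Service name
  namespace: default
spec:
  selector:
    app: fleet-server        # must match your Fleet Server / Agent pod labels
  ports:
    - name: apm
      port: 8200             # the host configured in the apm input above
      targetPort: 8200
```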

Envoy proxy is using too much memory

Envoy is using all the memory and the pods are getting evicted. Is there a way to set a limit on how much memory the Envoy proxy can use in the Envoy configuration file?
You can probably do that by configuring the overload manager in the bootstrap configuration for Envoy. Here's a documentation link for more details. It is done simply by adding an overload_manager section as follows:
overload_manager:
  refresh_interval: 0.25s
  resource_monitors:
    - name: "envoy.resource_monitors.fixed_heap"
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.resource_monitors.fixed_heap.v3.FixedHeapConfig
        # TODO: Tune for your system.
        max_heap_size_bytes: 2147483648 # 2 GiB <==== fix this!
  actions:
    - name: "envoy.overload_actions.shrink_heap"
      triggers:
        - name: "envoy.resource_monitors.fixed_heap"
          threshold:
            value: 0.95
    - name: "envoy.overload_actions.stop_accepting_requests"
      triggers:
        - name: "envoy.resource_monitors.fixed_heap"
          threshold:
            value: 0.98

How to configure Grafana Tempo properly?

I tried to use Grafana Tempo for distributed tracing.
I launch it from docker-compose:
version: "3.9"
services:
  # MY MICROSERVICES
  ...
  prometheus:
    image: prom/prometheus
    ports:
      - ${PROMETHEUS_EXTERNAL_PORT}:9090
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:cached
  promtail:
    image: grafana/promtail
    volumes:
      - ./log:/var/log
      - ./promtail/:/mnt/config
    command: -config.file=/mnt/config/promtail-config.yaml
  loki:
    image: grafana/loki
    command: -config.file=/etc/loki/local-config.yaml
  tempo:
    image: grafana/tempo
    command: [ "-config.file=/etc/tempo.yaml" ]
    volumes:
      - ./tempo/tempo-local.yaml:/etc/tempo.yaml
      # - ./tempo-data/:/tmp/tempo
    ports:
      - "14268"  # jaeger ingest
      - "3200"   # tempo
      - "55680"  # otlp grpc
      - "55681"  # otlp http
      - "9411"   # zipkin
  tempo-query:
    image: grafana/tempo-query
    command: [ "--grpc-storage-plugin.configuration-file=/etc/tempo-query.yaml" ]
    volumes:
      - ./tempo/tempo-query.yaml:/etc/tempo-query.yaml
    ports:
      - "16686:16686"  # jaeger-ui
    depends_on:
      - tempo
  grafana:
    image: grafana/grafana
    volumes:
      - ./grafana/datasource-config/:/etc/grafana/provisioning/datasources:cached
      - ./grafana/dashboards/prometheus.json:/var/lib/grafana/dashboards/prometheus.json:cached
      - ./grafana/dashboards/loki.json:/var/lib/grafana/dashboards/loki.json:cached
      - ./grafana/dashboards-config/:/etc/grafana/provisioning/dashboards:cached
    ports:
      - ${GRAFANA_EXTERNAL_PORT}:3000
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    depends_on:
      - prometheus
      - loki
with tempo-local.yaml:
server:
  http_listen_port: 3200
distributor:
  receivers:                     # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:                      # the receivers all come from the OpenTelemetry collector. more configuration information can
      protocols:                 # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:
        grpc:                    # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:
ingester:
  trace_idle_period: 10s         # the length of time after a trace has not received spans to consider it complete and flush it
  max_block_bytes: 1_000_000     # cut the head block when it hits this size or ...
  max_block_duration: 5m         # this much time passes
compactor:
  compaction:
    compaction_window: 1h            # blocks in this time window will be compacted together
    max_block_bytes: 100_000_000     # maximum size of compacted blocks
    block_retention: 1h
    compacted_block_retention: 10m
storage:
  trace:
    backend: local               # backend configuration to use
    block:
      bloom_filter_false_positive: .05  # bloom filter false positive rate. lower values create larger filters but fewer false positives
      index_downsample_bytes: 1000      # number of bytes per index record
      encoding: zstd                    # block encoding/compression. options: none, gzip, lz4-64k, lz4-256k, lz4-1M, lz4, snappy, zstd, s2
    wal:
      path: /tmp/tempo/wal       # where to store the wal locally
      encoding: snappy           # wal encoding/compression. options: none, gzip, lz4-64k, lz4-256k, lz4-1M, lz4, snappy, zstd, s2
    local:
      path: /tmp/tempo/blocks
    pool:
      max_workers: 100           # worker pool determines the number of parallel requests to the object store backend
      queue_depth: 10000
tempo-query.yaml:
backend: "tempo:3200"
and datasource.yml for provisioning datasources in Grafana:
apiVersion: 1
deleteDatasources:
  - name: Prometheus
  - name: Tempo
  - name: Loki
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    orgId: 1
    url: http://prometheus:9090
    basicAuth: false
    isDefault: false
    version: 1
    editable: false
  - name: Tempo
    type: tempo
    access: proxy
    orgId: 1
    url: http://tempo-query:16686
    basicAuth: false
    isDefault: false
    version: 1
    editable: false
    apiVersion: 1
    uid: tempo
  - name: Tempo-Multitenant
    type: tempo
    access: proxy
    orgId: 1
    url: http://tempo-query:16686
    basicAuth: false
    isDefault: false
    version: 1
    editable: false
    apiVersion: 1
    uid: tempo-authed
    jsonData:
      httpHeaderName1: 'Authorization'
    secureJsonData:
      httpHeaderValue1: 'Bearer foo-bar-baz'
  - name: Loki
    type: loki
    access: proxy
    orgId: 1
    url: http://loki:3100
    basicAuth: false
    isDefault: false
    version: 1
    editable: false
    apiVersion: 1
    jsonData:
      derivedFields:
        - datasourceUid: tempo
          matcherRegex: \[.+,(.+),.+\]
          name: TraceID
          url: $${__value.raw}
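The matcherRegex above assumes log lines that carry a bracketed, comma-separated section like [application,traceId,spanId]; the middle capture group becomes the TraceID. A quick check of what it extracts (the log line below is hypothetical):

```python
import re

# The derivedFields matcher from datasource.yml: pulls the middle field
# out of a bracketed [application,traceId,spanId] section of a log line.
MATCHER = r"\[.+,(.+),.+\]"

line = "2021-10-10 INFO [my-service,7c4a2b9e3f1d,9a1b] handled request"
match = re.search(MATCHER, line)
print(match.group(1))  # the captured trace id field
```

If your log format has a different number of comma-separated fields inside the brackets, the greedy groups will capture a different field, so the regex needs adjusting to your pattern.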
But if I test the datasource in Grafana, I have this error:
In the Loki view I can find the Tempo button for seeing traces... but I can't see them in Tempo because I have an error:
Anyway, if I take the trace id and search for it in Jaeger, I can see it correctly.
What am I missing in the configuration for Tempo? How do I configure it correctly?
Grafana 7.5 and later can talk to Tempo natively and no longer need the tempo-query proxy. I think this explains what is happening: Grafana is attempting to use the Tempo-native API against tempo-query, which exposes the Jaeger API instead. Try changing the Grafana datasource in datasource.yml to http://tempo:3200.
This solution applies to the tempo installation via the kubernetes helm charts (so it applies to the question's title but not the exact question):
Use the URL http://helmreleasename-tempo:3100 when configuring the Tempo datasource in Grafana. Check your Tempo service name in Kubernetes, and use port 3100.
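Applying the accepted answer to the question's datasource.yml, the Tempo entry would point at Tempo's own HTTP port rather than tempo-query (a sketch; the other fields stay as in the question):

```yaml
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    orgId: 1
    url: http://tempo:3200   # Tempo's native API, not the tempo-query Jaeger proxy
    basicAuth: false
    isDefault: false
    version: 1
    editable: false
    uid: tempo
```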

API Gateway HTTP Proxy integration with serverless-offline (NOT Lambda Proxy)

I am trying to use serverless-offline to develop / simulate my API Gateway locally. My API gateway makes liberal use of the HTTP proxy integrations. The production Resource looks like this:
I have created a serverless-offline configuration based on a few documents and discussion which say that it is possible to define an HTTP Proxy integration using Cloud Formation configuration:
httpProxyWithApiGateway.md - Setting an HTTP Proxy on API Gateway by using Serverless framework.
Setting an HTTP Proxy on API Gateway (official Serverless docs: API Gateway)
I have adapted the above two configuration examples for my purposes, see below.
Do you have any tips for what I might be doing wrong here?
plugins:
  - serverless-offline

service: company-apig

provider:
  name: aws
  stage: dev
  runtime: python2.7

resources:
  Resources:
    # Parent APIG RestApi
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: company-apig
        Description: 'The main entry point of the APIG'
    # Resource /endpoint
    EndpointResource:
      Type: AWS::ApiGateway::Resource
      Properties:
        ParentId:
          Fn::GetAtt:
            - ApiGatewayRestApi
            - RootResourceId
        PathPart: 'endpoint'
        RestApiId:
          Ref: ApiGatewayRestApi
    # Resource /endpoint/{proxy+}
    EndpointProxyPath:
      Type: AWS::ApiGateway::Resource
      Properties:
        ParentId:
          Ref: EndpointResource
        PathPart: '{proxy+}'
        RestApiId:
          Ref: ApiGatewayRestApi
    # Method ANY /endpoint/{proxy+}
    EndpointProxyAnyMethod:
      Type: AWS::ApiGateway::Method
      Properties:
        AuthorizationType: NONE
        HttpMethod: ANY
        Integration:
          IntegrationHttpMethod: ANY
          Type: HTTP_PROXY
          Uri: http://endpoint.company.cool/{proxy}
          PassthroughBehavior: WHEN_NO_MATCH
        MethodResponses:
          - StatusCode: 200
        ResourceId:
          Ref: EndpointProxyPath
        RestApiId:
          Ref: ApiGatewayRestApi
For the above configuration, I get this output. Apparently, the configuration registers no routes at all.
{
  "statusCode": 404,
  "error": "Serverless-offline: route not found.",
  "currentRoute": "get - /endpoint/ping",
  "existingRoutes": []
}
Related: I am also attempting to solve the same problem using aws-sam, at the following post - API Gateway HTTP Proxy integration with aws-sam (NOT Lambda Proxy)
By default, serverless-offline doesn't parse your resources for endpoints; enable it via custom config:
custom:
  serverless-offline:
    resourceRoutes: true
Ends up serving:
Serverless: Routes defined in resources:
Serverless: ANY /endpoint/{proxy*} -> http://endpoint.company.cool/{proxy}
Serverless: Offline listening on http://localhost:3000
Documentation

Unable to transform request to binary

To implement binary support in API Gateway for file upload, I have used the serverless-apigw-binary plugin and added the necessary content types which should be converted by API Gateway.
This is my serverless.yml file:
service: aws-java-gradle

provider:
  name: aws
  runtime: java8
  stage: dev
  region: us-east-1

custom:
  apigwBinary:
    types:
      - 'application/pdf'

plugins:
  - serverless-apigw-binary

package:
  artifact: build/distributions/hello.zip

functions:
  uploadLoadFiles:
    handler: com.serverless.UploadFileHandler
    role: UploadFileRole
    timeout: 180
    events:
      - http:
          integration: lambda
          path: upload
          method: put
          cors:
            origin: '*'
            headers:
              - Content-Type
              - X-Amz-Date
              - Authorization
              - X-Api-Key
              - X-Amz-Security-Token
              - X-Amz-User-Agent
              - X-Requested-With
            allowCredentials: false
          request:
            passThrough: WHEN_NO_TEMPLATES
            template:
              application/pdf: '{ "operation":"dev-aws-java-gradle", "base64Image": "$input.body", "query": "$input.params("fileName")" }'
          response:
            template: $input.body
            statusCodes:
              400:
                pattern: '.*"httpStatus":400,.*'
                template: ${file(response-template.txt)}
              401:
                pattern: '.*"httpStatus":401,.*'
                template: ${file(response-template.txt)}
              403:
                pattern: '.*"httpStatus":403,.*'
                template: ${file(response-template.txt)}
              500:
                pattern: '.*"httpStatus":500,.*'
                template: ${file(response-template.txt)}
              501:
                pattern: '.*[a-zA-Z].*'
                template: ${file(unknown-error-response-template.txt)}
    environment:
      S3BUCKET: ${env:S3_BUCKET}
      APS_ENV: ${env:APS_ENV}

# you can add CloudFormation resource templates here
resources:
  Resources:
    UploadFileRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: "UploadFileRole"
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: loggingPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource:
                    - 'Fn::Join':
                        - ':'
                        - - 'arn:aws:logs'
                          - Ref: 'AWS::Region'
                          - Ref: 'AWS::AccountId'
                          - 'log-group:/aws/lambda/*:*:*'
All the necessary settings got implemented in API Gateway after doing sls deploy
(checked based on this article: https://aws.amazon.com/blogs/compute/binary-support-for-api-integrations-with-amazon-api-gateway/).
But when I hit my endpoint, API Gateway gives out an error like this:
Verifying Usage Plan for request: 45bac722-f039-11e7-bcc6-f9a1aa509052. API Key: API Stage: fd4ue8lpia/int
API Key authorized because method 'PUT /upload' does not require API Key. Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage fd4ue8lpia/int
Starting execution for request: 45bac722-f039-11e7-bcc6-f9a1aa509052
HTTP Method: PUT, Resource Path: /upload
Method request path:
{}
Method request query string:
{}
Method request headers: {Accept=*/*, CloudFront-Viewer-Country=IN, postman-token=6f1a23ba-36c2-104d-2e12-dc2c76a985ee, CloudFront-Forwarded-Proto=https, CloudFront-Is-Tablet-Viewer=false, origin=chrome-extension://aicmkgpgakddgnaphhhpliifpcfhicfo, CloudFront-Is-Mobile-Viewer=false, User-Agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36, X-Forwarded-Proto=https, CloudFront-Is-SmartTV-Viewer=false, Host=fd4ue8lpia.execute-api.us-east-1.amazonaws.com, Accept-Encoding=gzip, deflate, br, X-Forwarded-Port=443, X-Amzn-Trace-Id=Root=1-5a4c530e-2c1927f85440b71b66cf0c15, Via=2.0 fc18daf68173838631b562fe2efaf8f8.cloudfront.net (CloudFront), x-postman-interceptor-id=8fa8440f-0541-fdac-c60c-6019c2269a66, X-Amz-Cf-Id=P4PYoi5Wwb-NCYRSdTQ_5TdtpbQLBaEATXoyJGC7cS8g6LsoCRPbkg==, X-Forwarded-For=157.50.20.12, 52.46.37.156, content-type=application/pdf, Accept-Language=en-US,en;q=0.9, cache-control=no-cache, CloudFront-Is-Desktop-Viewer=true}
Method request body before transformations: [Binary Data]
Execution failed due to configuration error: Unable to transform request
Method completed with status: 500
and the request is not crossing API Gateway to the Lambda function.
But I followed a solution mentioned here:
edit the Lambda function name in the Integration Request section of the resource in API Gateway back to the same name and click OK. Then the errors were gone and it was working fine.
I checked for changes in roles after doing that, but none were found.
Can anyone suggest what could have happened there, and any better solutions for the above-mentioned problem?
Thanks in advance.
But i followed a solution mentioned here to edit lambda function name
in integration request section of resource in api-gateway to the same
name and click ok. Then error's were gone and working fine.
When you edit lambda function on the console, it sets up the permissions to call the lambda function automatically. If you want to do that manually, you can do that using the CLI
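A sketch of that manual route with the AWS CLI (every name, account ID, API ID, and region below is a placeholder): grant API Gateway permission to invoke the function with aws lambda add-permission.

```shell
# Placeholders throughout: function name, statement id, account id, API id, region.
aws lambda add-permission \
  --function-name my-upload-function \
  --statement-id apigateway-invoke-upload \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:123456789012:abc123def4/*/PUT/upload"
```

The --source-arn restricts the permission to the specific API, stage, method, and path, which is the same resource policy the console creates for you when you re-save the integration.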