Istio - Dynamic request routing based on header-values - kubernetes

Dynamic request routing based on header-values
For our QA environment we need to configure a special kind of routing for incoming (ingress) as well as outgoing (egress) requests. For outgoing requests the rule should evaluate a header value with a regex, capture a value from that header, and use the captured value to build the URL the request should be redirected to. The header value changes dynamically, so the redirect URL cannot be hardcoded.
For example, if an outgoing request goes to services-master.anydomain.com but carries a header forwarded-for-feature with the value verbu-1234, the request should be redirected to services-verbu-1234.anydomain.com.
For incoming requests the condition is similar. If the Origin points to webapp-verbu-1234.anydomain.com but the request goes to services-master.anydomain.com, the regex should extract verbu-1234 from the origin domain and replace master in the target URL with the extracted value.
I know that it's possible to match header values with a regex, but I'm not sure whether captured values from a match can be used to influence the target URL; at least I couldn't find that in the documentation.

I don't think this is possible.
But if your QA system knows which features are available, and you need to do this in Istio, you could try creating a VirtualService for each feature; multiple VirtualServices would be merged by Istio.
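For example, a per-feature VirtualService might look roughly like this (a minimal sketch only: the hostnames, the forwarded-for-feature header, and the verbu-1234 feature name come from the question, while the exact-match style and the fallback route are assumptions):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: services-verbu-1234
spec:
  hosts:
  - services-master.anydomain.com
  http:
  # Requests tagged with the feature header go to the feature deployment
  - match:
    - headers:
        forwarded-for-feature:
          exact: verbu-1234
    route:
    - destination:
        host: services-verbu-1234.anydomain.com
  # Everything else keeps going to master
  - route:
    - destination:
        host: services-master.anydomain.com

Each feature would need its own match block (or its own VirtualService), so this only works when the set of active features is known and reasonably small.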

Related

How can I use an Azure Front Door Rules Engine match condition to only match requests to the root of a site?

I'm trying to set up a set of rules on my Azure Front Door to redirect all requests for the root of a site to a set of language-based subfolders, based on the location match of the incoming request.
Doing the Geo-location part is fairly straightforward, but I'm not having much success limiting the requests to only the root of the site - or at least when I try to do so, my rules don't appear to match and I don't get the redirect I'm expecting.
I've tried setting the following conditions:
IF "Request Path" EQUAL "/"
AND IF "Remote address" "Geo Match" "Switzerland, CH"
THEN "Routing Configuration" "Redirect" "307"
Host: Preserve;
Destination Path: Replace: "/de-ch/"
However, I don't appear to get the redirect when requesting the root of the site from a browser based in Switzerland.
I can't find any actual examples of using the Rules Engine with either Path or URL matching, so I'm wondering if I should be using "Request URL" (and therefore I'll need to put the scheme and host in there, which is less than ideal as the ruleset may be working with multiple front-end hosts), or should what I'm doing work?
The "Request Path" match condition appears to match on the path after the initial /, for example given a request for:
https://www.example.com/folder/page.html
The following values are used in the match conditions:
Request Path: folder/page.html
Request URL: https://www.example.com/folder/page.html
Request File Extension: html
Request Filename: page.html
I therefore had to use the Request URL condition and limit my rules to the specific domain in the request to ensure that we were only matching the root requests.
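Based on that, the conditions would look something like this (a sketch only; www.example.com stands in for the actual front-end host):

IF "Request URL" EQUAL "https://www.example.com/"
AND IF "Remote address" "Geo Match" "Switzerland, CH"
THEN "Routing Configuration" "Redirect" "307"
Host: Preserve;
Destination Path: Replace: "/de-ch/"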
I have not tried specifying an operator of Not Any yet, although that could also be a solution (we needed more than 25 rules, which is a further limitation, so we ended up using a different solution).
Zhaph said they had not tried the Not Any operator at the time of writing.
I've just used it and I can confirm Not Any works for matching just the root of the domain/subdomain. Definitely takes the hassle out of creating multiple match conditions on Request URL.
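For reference, the root-only match using Not Any might look roughly like this (again a sketch, not verified against the exact portal wording):

IF "Request Path" "Not Any"
AND IF "Remote address" "Geo Match" "Switzerland, CH"
THEN "Routing Configuration" "Redirect" "307"
Host: Preserve;
Destination Path: Replace: "/de-ch/"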

AWS Api Gateway Setting the header value to a default value using http Integration

I am using AWS API Gateway and I want to set my integration type to HTTP. The integration URL is https://xxxxxx.com, which takes a header "apikey". I don't expect the end user to pass the header; instead I want to set apikey to some constant value.
I see that there is a way to force the user to pass the header (by making the header required under the Method Request section). However, I want to set it to a default value.
For example, all requests that internally call the URL inside the API Gateway should pass the header value "12345".
You can add/remove/override headers with an Integration Request Mapping Template.
In the API Gateway console, choose the relevant api/resource/method. Go to Integration Request > Mapping Templates and choose your Content-Type (if requests are going to be received without a Content-Type header, set the Content-Type for the mapping template to application/json, which is the default behaviour).
Then in the actual mapping template add the following:
{
#set($context.requestOverride.header.apikey= "testMe")
}
This will add (or overwrite if it already exists) a header called apikey with the value "testMe" to all http requests downstream.
If you take this route, then you will need to also map over any other headers, path parameters, query parameters or body that you wish to pass through.
You could loop through the headers and query parameters like this.
## First set the header you are adding
#set($context.requestOverride.header.apikey= "testMe")
## Loop through all incoming headers and set them for downstream request
#foreach($param in $input.params().header.keySet())
#set($context.requestOverride.header[$param]= $input.params().header.get($param))
#if($foreach.hasNext) #end
#end
## Loop through all incoming query parameters and set them for downstream request
#foreach($param in $input.params().querystring.keySet())
#set($context.requestOverride.querystring[$param]= $input.params().querystring.get($param))
#if($foreach.hasNext) #end
#end
As you need to ensure that the header apikey is set to a default value, you should set the override for apikey before looping through the rest of the headers as only the first override will take effect.
The relevant AWS documentation can be found here.
The other alternative would be to point your API Gateway at a Lambda and make the call from the Lambda instead.
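If you go that route, a minimal Node.js sketch of such a Lambda might look like this (the downstream host xxxxxx.com and the apikey value 12345 are placeholders from the question; the method, path handling, and response shape are assumptions):

'use strict';
const https = require('https');

// Sketch: call the downstream API with a constant apikey header
// and hand its response back to API Gateway.
exports.handler = async (event) => {
    const body = await new Promise((resolve, reject) => {
        const req = https.request({
            hostname: 'xxxxxx.com',        // downstream host from the question
            path: event.path || '/',       // assumes a Lambda proxy integration event
            method: 'GET',
            headers: { apikey: '12345' }   // constant header value
        }, (res) => {
            let data = '';
            res.on('data', (chunk) => { data += chunk; });
            res.on('end', () => resolve(data));
        });
        req.on('error', reject);
        req.end();
    });
    return { statusCode: 200, body };
};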
First of all, thanks to #KMO for his help. The following is the solution:
Enable HTTP proxy integration.
Add the headers apikey=xxxx and Accept-Encoding=identity under the same Integration Request -> HTTP Headers.
Under Settings -> Binary Media Types, add '*' and */* as two separate Binary Media Types (two different lines). This step is needed to resolve the gzip handling when returning the response.
Add the query parameter country in the URL Query String Parameters section.
In the Integration Request, map the country parameter to ctry by entering method.request.querystring.country under "Mapped from". This ensures that the query parameter country passed in the main URL is fed to the downstream URL as the parameter ctry.
The advantage of this approach is that even if you override the header apikey, the one set under the HTTP Headers takes precedence.

Redirect S3 subfolder to another domain with Cloudfront

I have a static showcase website hosted on S3 and using CloudFront, and an online shop (Prestashop) and a blog (Wordpress), both hosted on OVH servers.
I want to make a hidden redirection on two subfolders of my static website so it acts as if my 3 websites were on the same host, using the following pattern:
mysite.com/ --> normal behaviour
mysite.com/blog/ --> myblog.com/
mysite.com/store/ --> mystore.com/
Of course, I need every request to be handled that way, so that for example:
mysite.com/store/fr/1-myproduct.html
returns what
mystore.com/fr/1-myproduct.html
would have returned.
This seems really tricky, since I've found no real solution to my problem, and at this point I doubt it's even possible to do such a thing.
I considered using a proxy, but wouldn't that be like using a sledgehammer to get rid of a fly?
I have searched for any possible redirection and I was only able to find subdomain/domain redirections...
So my question would be "How can I do that?"
But right now I'm wondering "Can one do that?"
P.S.: It's my first post ever. I'm used to searching for a long time before posting and I always end up finding a solution, except for now. Any suggestion is welcome.
I'll check about proxies since it's my last hope
Wait.
I have a static showcase website hosted on S3 and using CloudFront
CloudFront is a reverse proxy.
Depending on how much flexibility you have with the other two sites, CloudFront can potentially take you where you want to go, combining multiple independent sites under one hostname.
This is done by creating additional origin servers for your distribution and then creating additional cache behaviors, with path patterns matching the additional paths, such as /blog and /blog/*, that send requests to the alternate origins.
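Conceptually, the distribution's behaviors would end up looking something like this (a sketch; the origins are the hostnames from the question):

Path pattern: /blog    -> origin: myblog.com
Path pattern: /blog/*  -> origin: myblog.com
Path pattern: /store   -> origin: mystore.com
Path pattern: /store/* -> origin: mystore.com
Default (*)            -> origin: the S3 bucket serving mysite.com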
There is, however, a catch. CloudFront can't remove the matched pattern, so mainsite.example.com/blog/hello-world, matching the pattern /blog/*, will be forwarded to blog.example.com/blog/hello-world -- not to blog.example.com/hello-world.¹ This will require changes to the other sites in order to integrate them in this way.
Unless...
If you already have unique path patterns, no problem, but if the extra sites' content is in the root of each individual site, you see the issue here. Not insurmountable, but still an issue.
Your only alternative will be a reverse proxy behind CloudFront to rewrite those paths and send the requests on to the alternate servers. Truly not a big deal either, since HAProxy, Nginx, and Varnish all offer such functionality and can handle a large number of proxied requests on surprisingly small hardware.
The recently (2017) released Lambda@Edge service allows you to rewrite paths on the fly, as requests are processed, if necessary.
But the bottom line is that the reason you have not found a real solution other than a proxy is that there is no alternative -- every path at a given hostname must be handled in one logical place -- one group of one or more identically-configured endpoints. In the case of CloudFront, the logical place is physically distributed globally.
¹ CloudFront, natively, can actually prepend onto the path before forwarding the request, so requests for mainsite.example.com/bar/fizz can be forwarded to foosite.example.com/foo/bar/fizz by setting the origin path to /foo when you configure the origin. But it can't remove path parts or otherwise modify the path without also using Lambda@Edge. In the scenario discussed above, you would leave the origin path blank when configuring the additional origin servers.
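For illustration, a Lambda@Edge origin-request function that strips the matched prefix might look roughly like this (a sketch only, not the author's code; the /blog prefix comes from the question):

'use strict';

// Sketch: remove the "/blog" prefix before the request is forwarded to the
// blog origin, so mysite.com/blog/hello-world is fetched from the origin as /hello-world.
exports.handler = (event, context, callback) => {
    var request = event.Records[0].cf.request;
    request.uri = request.uri.replace(/^\/blog/, '') || '/';
    return callback(null, request);
};

A similar function attached to the /store/* behavior would handle the shop.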
A single S3 bucket with the following behavior:
domain.com -> serves the files from the root of the bucket
domain.com/blog -> serves the files from a subfolder in the S3 bucket (this is not default behavior)
How to:
https://aws.amazon.com/ru/blogs/compute/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/
Lambda@Edge code:
'use strict';
exports.handler = (event, context, callback) => {
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;
    // Extract the URI from the request
    var olduri = request.uri;
    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '/index.html');
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    // Return to CloudFront
    return callback(null, request);
};
Summary of the code above:
Lambda@Edge rewrites the path "/blog/" to "/blog/index.html".

Is it bad practice to allow specifying parameters in URL for POST

Should parameters for POST requests (elements of the resource being created) be allowed to be added to the URL as well as in the body?
For example, let's say I have a POST to create a new user at
/user
With the full set of parameters (name, email, etc.) in the body of the request.
However, I've seen many APIs that accept the values either in the body or as URL parameters, like this:
/user?name=foo&email=foo@bar.com
Is there any reason this second option, allowing the parameters in the URL, is bad practice? Does it violate any component of REST?
The intent of a query parameter is to help identify the target resource for a request. The body of a POST should be used to specify instructions to the server.
The query component contains non-hierarchical data that, along with
data in the path component (Section 3.3), serves to identify a
resource within the scope of the URI's scheme and naming authority
(if any).
    -- RFC 3986 Section 3.4
The hierarchical path component and optional query component serve
as an identifier for a potential target resource within that origin
server's name space.
    -- RFC 7230 Section 2.7.1
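To make the distinction concrete, the conventional form would keep identifying data in the URI and the resource's data in the body, something like this (a sketch; the field names come from the question and api.example.com is a placeholder host):

POST /user HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"name": "foo", "email": "foo@bar.com"}

A query component such as GET /user?email=foo@bar.com would then be reserved for identifying or filtering resources, not for supplying the data of the resource being created.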
The Udacity Web Development course, by Steve Huffman (the man behind Reddit), recommends only using POST requests to update server-side data. Steve highlights why using GET parameters to do so can be problematic.

Snort rule to verify content of an HTTP request doesn't work

I am trying to verify the contents of the HTTP response to find the content "abbb" in it. So my rule was:
alert tcp MY_SERVER HTTP_PORTS -> any any(msg:"The page accessed has content abbb";to_client; established; content:"abb";sid:XXXXX; rev:x;)
Unfortunately this rule does not seem to work. Can anyone please tell me if there is some issue with my rule?
For starters, you need to fix the to_client part of the rule, as this is not valid syntax. You will need to change it to:
flow:to_client,established;
You can find more on flow here.
If you are just looking for the content "abbb" sent from your server to the client then you just need a simple content match like you have. I recommend using the fast pattern matcher here to improve the efficiency of the rule. So your content match would look something like:
content:"abbb"; fast_pattern:only;
Putting this together, your rule might look something like:
alert tcp MY_SERVER HTTP_PORTS -> any any (msg:"The page accessed has content abbb"; flow:to_client,established; content:"abbb"; fast_pattern:only; sid:XXXXX; rev:x;)
If this still isn't triggering, then there is probably something else going on. Since you are just looking for this in the content, you need to check your inspection depth in the http preprocessor. There are a server_flow_depth and a client_flow_depth. Try setting these to 0 (unlimited) and see if your rule triggers after that. For example, if the relevant flow depth were 300 and the content "abbb" didn't appear until after 500 bytes, the rule would never trigger because Snort isn't configured to inspect that far into the payload.
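For example, in snort.conf the relevant http_inspect settings might look something like this (a sketch; the ports and profile are assumptions, the key part is the two flow depth options set to 0):

preprocessor http_inspect_server: server default \
    profile all ports { 80 8080 } \
    server_flow_depth 0 \
    client_flow_depth 0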
If you have adaptive profiling enabled, then you need to add the metadata service for http, otherwise the rule won't match HTTP traffic. This would look something like:
metadata:service http;
If you don't use adaptive profiling then it will use the ports in the rule header.
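Putting the pieces together, the full rule with the service metadata might look something like this (still using the placeholder variables and sid/rev from the question):

alert tcp MY_SERVER HTTP_PORTS -> any any (msg:"The page accessed has content abbb"; flow:to_client,established; content:"abbb"; fast_pattern:only; metadata:service http; sid:XXXXX; rev:x;)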