Metaflow: "Missing authentication token" when accessing the metadata/metaflow service URL in the browser - netflix-metaflow

I’m currently experimenting with Metaflow. I followed the documentation and was able to deploy an AWS setup with the provided CloudFormation template.
My question is: why do I always get
message: "Missing Authentication Token"
when I access METAFLOW_SERVICE_URL in the browser, even though I made sure that APIBasicAuth was set to false when creating the CloudFormation stack?
Shouldn’t this setting make the metadata/metaflow service accessible without the authentication/api key?
How can I resolve this? Or is this expected, i.e. the metadata/Metaflow service URL simply cannot be viewed in a browser?
Thanks in advance

This was resolved in this GitHub issue.
You still need to set the x-api-key header if you are trying to access the service URL via the browser. To get the API key, you can go to the AWS console:
API Gateway -> API Keys -> Show API key
Alternatively, you can use the Metaflow client in the SageMaker notebook, which should be set up automatically for you by the template.
Also worth mentioning that there are two sets of endpoints: the one provided by the API Gateway (which you seem to be hitting) and the one provided by the service itself. The API Gateway forwards requests to the service endpoints but needs the x-api-key header to be set. You can probably try hitting the service endpoints directly since you disabled auth.
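For reference, a rough sketch of what this can look like from the command line; the key lookup uses the AWS CLI, and the gateway URL, key value and /flows path are placeholders rather than values from your deployment:
# List the API keys created for the gateway, including their values
aws apigateway get-api-keys --include-values --query 'items[].{name:name,value:value}'
# Call the gateway endpoint with the key in the x-api-key header
curl -H "x-api-key: <your-api-key-value>" "https://<api-id>.execute-api.<region>.amazonaws.com/api/flows"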

Related

AWS - API Gateway - HTTPS Request returning 404 Not Found

I am working on creating a new request in AWS API Gateway. I am having issues with a 404 Not Found on the URL request.
The request (I had to create a fake one for this question):
GET https://hello.stackoverflow.com/services/misc/myroute/v1/swagger.json
I created a route in API Gateway ANY /services/misc/myroute/{proxy+}
I attached the route to a Load Balancer Listener integration
I set up the listener rule in the Load Balancer:
IF Path is /services/misc* Then Forward to Target
IF Requests otherwise not routed Then Forward to Default
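For reference, the same route and integration expressed with the AWS CLI would look roughly like this (a sketch only; the API id, integration id, VPC link id and listener ARN are placeholders, and it assumes the ALB is reached through an existing VPC link):
# Integration pointing at the ALB listener
aws apigatewayv2 create-integration --api-id <api-id> --integration-type HTTP_PROXY \
    --integration-method ANY --integration-uri <alb-listener-arn> \
    --connection-type VPC_LINK --connection-id <vpc-link-id> --payload-format-version 1.0
# Route that forwards everything under /services/misc/myroute/ to that integration
aws apigatewayv2 create-route --api-id <api-id> \
    --route-key 'ANY /services/misc/myroute/{proxy+}' --target integrations/<integration-id>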
Created logs for this system in the AWS API Gateway: Monitor -> Logging -> Set Log Destination
Set variables for the log format using the $context variables (see the Context Variables documentation).
Ex Log:
{ "requestId":"QWRHQKWFHWAFZ=",
"routeKey":"ANY /services/misc/myroute/{proxy+}",
"path":"/services/misc/myroute/v1/swagger.json",
"domain":"hello.stackoverflow.com",
"domain_prefix":"hello",
"httpMethod":"GET", "status":"404","protocol":"HTTP/1.1", "endpoint":-" }
One final check I did to make sure it completes its "route" was to look at the requests in monitoring and confirm that the 4xx responses come from this ALB listener.
I can send the request via localhost and get a response with the JSON body:
GET https://localhost:8080/v1/swagger.json --> Status 200 OK with body filled
My quest to solve the issue has led me to many older (2019) Stack Overflow questions that seem out of date with the current AWS console, as does the AWS documentation. See the links below...
AWS API Gateway Method request path parameter not working
AWS API Gateway 404 page not found error when invoking endpoint url
With this being my first project in the AWS cloud space, I am not sure where else to turn. My guess is that the authentication headers from API Gateway are being lost, but I'm not sure where I can see this loss happening.
From my understanding of how the AWS Request Flow goes, I created this diagram:

Mendix Swagger REST web service in UAT/PROD

Hi, I am working on Mendix REST web services, and through Swagger I can test against localhost just fine.
But when I promote the app to acceptance, I need to update the web service security to 'Requires authentication', which needs a username and password.
When the web services are in UAT, authentication of the request fails. Can you please help if you have a solution for this situation?
Thanks
This could be due to the path-based access restrictions in your cloud environment. Allow full access to the appropriate service paths to verify your endpoints.
Restart your application after applying the changes.
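Once the paths are opened up, a quick way to verify the published service outside of Swagger is a plain request with the same credentials; the app URL and service path below are made up for illustration:
# Basic-auth request against the acceptance environment (hypothetical URL and path)
curl -u myserviceuser:mypassword https://myapp-accp.mendixcloud.com/rest/myservice/v1/objects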

Close proxy API access

Hi community,
Grafana 8.2.5
We have a Grafana 8.2.5 system. It had a security audit, in which the API access was criticized.
We have enabled anonymous access for users without a login.
[auth.anonymous]
enabled = true
org_name = IT.NRW
org_role = Viewer
When I try to access Grafana like this:
curl http://<fqdn>:3000/api/datasources -> {"message":"Permission denied"}
curl http://admin:<password>@<fqdn>:3000/api/datasources -> a valid JSON object with the datasources etc....
But the security audit also found access to the datasource proxy API:
curl http://<fqdn>:3000/api/datasources/proxy/3/query?db=<db>\&q=SELECT+*+FROM+<ts>\&epoch=ms
So I can ALWAYS query the API, with or without credentials.
Security audit: a denial of service (DoS) is possible, maybe some SQL injection.
I don't want to discuss this topic here.
I have to close access through the API, at least from other network segments.
Any hints?
Thanks in advance.
I'm a Grafana beginner!
I'm not complaining; the security audit listed the two topics (DoS/SQL injection).
I didn't find any configuration options (grafana.ini) for closing the proxy API interface (only data_source_whitelist-ing).
So, I added some rules to the NGINX config in front of the Grafana server to forbid proxy API access -> throw a 40x error.
Now the web UI is no longer able to fetch and render the data.
My conclusion:
the Grafana architecture defines that the proxy API is used by the web UIs.
with or without credentials, a user can fire a query (DoS) using the proxy API.
with or without credentials, the query is passed through the proxy API to the datasource, so SQL injection is potentially possible.

What's the hostname of the OpenShift master server for internal access?

If I want to access the REST API of the OpenShift master server from anywhere in my company, I use https://master.test04.otc-test.company.com:8443, which works just fine.
Now I'm writing an admin application that accesses the REST API and is deployed in this OpenShift cluster. Is there a generic name or environment variable in OpenShift to get the hostname of the master server?
Background: My admin application will be deployed on multiple OpenShift clusters, which do not have the same URL. It would be very handy to have it autodiscover the hostname of the current master server instead of configuring this value for every deployment.
Use environment variables:
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
In the container, unless the mounting of service account details has been disabled, you can also access the directory:
/var/run/secrets/kubernetes.io/serviceaccount
In this directory you will find a token file which contains the access token for the service account the container runs as. This means you can create a separate service account for the application in that project, and use RBAC to control what it can do via the REST API.
That same directory also has a namespace file so you know what project the container is running in, and files with certificates to use when accessing the REST API over a secure connection.
This is the recommended approach, rather than trying to pass an access token to your application through its configuration.
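A minimal sketch of what that can look like from inside a pod; the /version path here is just an example endpoint:
# Read the mounted service account credentials
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# Call the API server via the in-cluster address, using the mounted CA certificate
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/version"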
Note that in OpenShift 4, if you need to access the OAuth server endpoint, it is on a separate URL from the REST API. In 3.x, they were on the same URL.
In 4.0, you can access the path /.well-known/oauth-authorization-server on the REST API URL, to get information about the separate OAuth server endpoint.
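For example, reusing the in-cluster variables from the sketch above:
# Discover the separate OAuth server endpoint (OpenShift 4)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/.well-known/oauth-authorization-server"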
For additional information on giving REST API access to an application via a service account, see:
https://cookbook.openshift.org/users-and-role-based-access-control/how-do-i-enable-rest-api-access-for-an-application.html
Note that that page currently says you can use https://openshift.default.svc.cluster.local as URL, but this doesn't work in OpenShift 4.

Implementing AWS API Gateway

I am trying to test an implementation of AWS API Gateway in front of an existing web application's REST endpoint on AWS. This endpoint is for bulk updates using the POST/PATCH methods.
Looking through the vast and lengthy documentation on the AWS site, it talks about IAM roles for authentication.
Any high-level tips on implementing API Gateway to get started would be appreciated.
Choosing an IAM role for authorization, and also choosing other authorizers (Lambda or Cognito), is optional.
Follow these simple steps and you are ready:
Create an API.
Go to Resources >> Actions >> Create Method (POST/PATCH).
For Integration Type, choose HTTP and enter your endpoint URL.
Go to Resources >> Actions >> Deploy API.
This will deploy the API Gateway application and provide you with an endpoint URL to use.
Again: choosing Models, API Keys, Client Certificates, Custom Domains, Authorizers and VPC setup are all optional.
It's simple and easy.
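If you would rather script it than click through the console, a rough AWS CLI sketch of the same steps could look like this; the IDs and the backend URL are placeholders:
# 1. Create the API
aws apigateway create-rest-api --name bulk-update-api
# 2. Look up the root resource id ("/") of the new API
aws apigateway get-resources --rest-api-id <api-id>
# 3. Create the POST method (no auth, matching the optional choices above)
aws apigateway put-method --rest-api-id <api-id> --resource-id <root-resource-id> \
    --http-method POST --authorization-type NONE
# 4. Point it at the existing backend endpoint via an HTTP proxy integration
aws apigateway put-integration --rest-api-id <api-id> --resource-id <root-resource-id> \
    --http-method POST --type HTTP_PROXY --integration-http-method POST \
    --uri https://example.com/bulk-update
# 5. Deploy to a stage, which gives you the invoke URL
aws apigateway create-deployment --rest-api-id <api-id> --stage-name test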