After my VPS provider moved their servers to a new location, I get an AccessDeniedException: 403 This service is not available from your region error for every gsutil request.
The new server IP, where gsutil doesn't work, is 51.254.184.21; previously it was 88.198.255.218, and that one worked fine.
I found a related BigQuery issue where a similar error was caused by an incorrect mapping of IP addresses to regions on Google's side.
I am asking here on SO because that's where Google's support page routes these questions.
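For example (the bucket name below is a placeholder; every gsutil command fails the same way):
$ gsutil ls gs://my-bucket
AccessDeniedException: 403 This service is not available from your region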
I'm simply trying to use Serverless and Lambda to make GET and POST requests to my Atlas cluster. I've followed all the tutorials below, which are very similar:
https://hackernoon.com/building-a-serverless-rest-api-with-node-js-and-mongodb-2e0ed0638f47
https://dev.to/adnanrahic/a-crash-course-on-serverless-apis-with-express-and-mongodb-193k
https://dev.to/saigowthamr/build-and-deploy-a-rest-api--using-serverless-express-and-nodejs-3331
https://blog.eduonix.com/web-programming-tutorials/serverless-development-nodejs-aws-lambda/?unapproved=84149&moderation-hash=9ac99ba21b72d6be12fbb14c1005a540#comment-84149
Using Insomnia or Postman I can make GET and POST requests to a locally hosted database, but not to an Atlas cluster. The requests always result in 502 Bad Gateway with a JSON message of "internal server error". I've tried switching the cluster host from AWS to Azure, and that didn't help. As for drivers, I've tried every variety of connection string. I've whitelisted all IPs, so access is not an issue. Please help.
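My handler boils down to something like this (the URI, database, and collection names are placeholders):

// Minimal Lambda handler that reads documents from the Atlas cluster.
const { MongoClient } = require('mongodb');

const uri = process.env.MONGO_URI; // e.g. mongodb+srv://user:pass@cluster0.mongodb.net/test

module.exports.getNotes = async () => {
  const client = await MongoClient.connect(uri);
  try {
    const notes = await client.db('test').collection('notes').find({}).toArray();
    return { statusCode: 200, body: JSON.stringify(notes) };
  } finally {
    await client.close();
  }
};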
I deleted the cluster, project, organization, and users in my Atlas account and started over. I didn't manually create a new user through the Database Access tab, but rather created a user with the prompt you get when creating a cluster. I noticed there was a note regarding special characters when making a connection:
When entering your password, make sure that any special characters are URL encoded.
My previous password used a # symbol, which may have been causing problems, so be careful when using email addresses as user names, too (the @ needs encoding). I whitelisted 0.0.0.0/0 as usual. I don't think it matters, but I only have one user, and I added a comment to that user as well. Now I can make a connection using the 2.2.12-or-later Node.js driver connection string. I hope this helps someone.
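For example, a quick sketch of encoding the credentials before building the connection string (the user, password, and cluster host here are made up):

// URL-encode credentials that may contain special characters (@, #, etc.)
// before interpolating them into the connection string.
const { MongoClient } = require('mongodb');

const user = encodeURIComponent('me@example.com'); // '@' becomes %40
const pass = encodeURIComponent('p#ssw0rd');       // '#' becomes %23
const uri = `mongodb+srv://${user}:${pass}@cluster0.mongodb.net/test?retryWrites=true&w=majority`;

MongoClient.connect(uri)
  .then(client => client.db('test').command({ ping: 1 }).finally(() => client.close()))
  .catch(console.error);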
I am running a Ruby Sinatra server on my development machine with ngrok.
I have verified that accessing the publicly exposed URL through ngrok does get routed to the Ruby server and that the correct response is returned.
I also used apitester.com to verify that the exposed URL is accessible from the internet and that the correct response is returned.
When I attempt to test using the Alexa Simulator in the Alexa developer console, I only get "I am unable to reach the requested skill". I get the same response using a physical Echo, too.
I have double-checked the endpoint configuration in the developer console, and everything looks OK to me.
I am using HTTPS for the endpoint, with "My development endpoint is a sub-domain of a domain that has a wildcard certificate" as the SSL certificate type.
Sending the JSON request that the Alexa Simulator generates directly to my Sinatra server succeeds, and the appropriate response is returned. That eliminated my concern that this was related to the Sinatra/ngrok configuration, but the skill continues to fail when entering text (or speaking) into the simulator.
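For example, replaying the simulator's JSON by hand works (the ngrok URL and file name are placeholders):
$ curl -X POST https://abcd1234.ngrok.io/ \
    -H "Content-Type: application/json" \
    -d @simulator-request.json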
This is my first attempt at creating an Alexa skill, so I may be overlooking something obvious.
Does anyone have any suggestions?
Solved
I had set both the default endpoint and the North America endpoint to the same URL.
Removing the optional North America endpoint URL solved the problem for me.
When using the Let's Encrypt certbot to generate an SSL certificate for my domain, I am prompted to make a file available at my domain to verify my control of it:
http://example.com/.well-known/acme-challenge/XXXXXX
However, when I try to upload that file to my Google Cloud Storage bucket, I get the following error:
$ gsutil rsync -R . gs://example.com
Building synchronization state...
Starting synchronization
Copying file://./.well-known/acme-challenge/XXXXXX [Content-Type=application/octet-stream]...
BadRequestException: 400 ACME HTTP challenges are not supported.
Does Google Cloud Storage expressly forbid URLs with "acme-challenge" in the path? Is it possible to set up a Let's Encrypt certificate for a domain hosted in a Google Cloud Storage bucket?
We worked around this by exposing /.well-known/acme-challenge as an endpoint and storing the challenge in a different directory that Cloud Storage does allow. When LE hits that endpoint, we retrieve the generated challenge from its directory and serialize it into the response.
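A minimal sketch of such an endpoint, assuming Node 18+/Express and a hypothetical acme-tokens/ prefix in the bucket (the stack and names are assumptions, not what the answer specifies):

// Hypothetical Express handler: the challenge files live under an allowed
// prefix in the bucket, and this endpoint serves them from the blocked path.
const express = require('express');

const app = express();
const BUCKET = 'example.com'; // placeholder bucket name

app.get('/.well-known/acme-challenge/:token', async (req, res) => {
  // Fetch the token from the allowed location in Cloud Storage (global fetch, Node 18+).
  const url = `https://storage.googleapis.com/${BUCKET}/acme-tokens/${encodeURIComponent(req.params.token)}`;
  const upstream = await fetch(url);
  if (!upstream.ok) return res.sendStatus(404);
  res.type('text/plain').send(await upstream.text());
});

app.listen(8080);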
I have an AWS EC2 Jira instance running behind an AWS Classic Load Balancer. The site loads fine in the browser, but all API requests return 404 for some reason. It is not a Jira 404, but a generic 404 response with no body and minimal headers. The only useful response header seems to be Server: nginx.
I've tried whitelisting my client IP, opening up all ports, sending requests both to the LB and directly to the instance with the proper security group settings, etc., but the same 404 response is returned. I'm using Postman to test the API. I noticed that when I load the EC2 instance directly in the browser, it redirects to the load balancer.
This returns 200 with HTML (basic auth works, too):
GET http://jira (home page)
This returns 404:
GET http://jira/rest/api/2/issue/ticket-num (as does any other /rest/ endpoint)
Where should I start looking to debug this 404 issue? I feel like I'm missing something basic. I'm not seeing any Jira configuration for setting up its REST API. Perhaps it's a web server configuration issue, although I've never come across manual web server configuration while installing Jira, so maybe it's on the AWS side?
EDIT: I'm still waiting to get SSH access to the instance, so I'll update as I get more info and access.
An HTTP 404 response with a very limited set of headers like this could come from the default (bottom) rule in the ELB. I experienced a similar issue: I was getting HTTP 404 because, in one of the ELB rules, I had set a path condition instead of a host-header condition and put the host domain name there. So the rule never matched, and the default rule returned 404 because no such path exists on the instance.
I would recommend trying the "Redirect to" or "Return fixed response" options on the default rule to check whether your requests are falling through to it.
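You can also check whether host-header matching is the problem by sending the request straight to the load balancer's DNS name with an explicit Host header (both names below are placeholders):
$ curl -i -H "Host: jira.example.com" \
    http://my-elb-1234567890.us-east-1.elb.amazonaws.com/rest/api/2/serverInfo
If the response changes depending on the Host header you send, the ELB rule matching is the culprit.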
When requesting the Facebook Graph API with a Facebook app, I get the error (#5) Unauthorized source IP address. Searching the internet, I found that adding the server's IP to the app's whitelist may help. But when I do that, I get the following error instead: Uncaught OAuthException: This IP can't make requests for that application.
The server definitely has only the one IP address I added to the whitelist. Using another server with the same app works just fine. I suspect this is due to a bug in our application that requested the API too often with invalid keys (all from this very IP).
So, to me, this seems like something we need to contact Facebook about so our IP gets unblocked. Does somebody have an idea how to do that?
First, access canhazip.com or jsonip.com from the server to make sure it has the public IP you think it does. Second, make sure that IP address is in the "Server IP Whitelist" in the app's Settings > Advanced section of the developer console (https://developers.facebook.com/apps/[APP ID]/settings/advanced/).
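For example, from the server itself:
$ curl https://canhazip.com
203.0.113.7
If that IP (the one above is just an example) doesn't exactly match the whitelist entry, the Graph API will keep rejecting the requests.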