Flow testnet: "An error occurred when interacting with the Access API" - onflow-cadence

I'm trying to make a request to the testnet but getting the following error:
HTTP Request Error: An error occurred when interacting with the Access API.
transport=FetchTransport
error=Failed to fetch hostname=access.devnet.nodes.onflow.org:9000
path=/v1/scripts?block_height=sealed
method=POST
requestBody={
"script":"aW1wb3J0IEZ1bmdpYmxlVG9rZW4gZnJvbSAweDlhMDc2NmQ5M2I2NjA4YjcKaW1wb3J0IEZVU0QgZnJvbSAweGUyMjNkOGE2MjllNDljNjgKCnB1YiBmdW4gbWFpbihhZGRyZXNzOiBBZGRyZXNzKTogVUZpeDY0IHsKICAgIGxldCBhY2NvdW50ID0gZ2V0QWNjb3VudChhZGRyZXNzKQoKICAgIGxldCB2YXVsdFJlZiA9IGFjY291bnQuZ2V0Q2FwYWJpbGl0eSgvcHVibGljL2Z1c2RCYWxhbmNlKSEKICAgICAgICAuYm9ycm93PCZGVVNELlZhdWx0e0Z1bmdpYmxlVG9rZW4uQmFsYW5jZX0+KCkKICAgICAgICA/PyBwYW5pYygiQ291bGQgbm90IGJvcnJvdyBCYWxhbmNlIHJlZmVyZW5jZSB0byB0aGUgVmF1bHQiKQoKICAgIHJldHVybiB2YXVsdFJlZi5iYWxhbmNlCn0=",
"arguments":["eyJ0eXBlIjoiQWRkcmVzcyIsInZhbHVlIjoiMHg1ODA2MjJlNzQ1MTgzYjE2In0="]
}
The base64 decoded script is:
import FungibleToken from 0x9a0766d93b6608b7
import FUSD from 0xe223d8a629e49c68

pub fun main(address: Address): UFix64 {
    let account = getAccount(address)

    let vaultRef = account.getCapability(/public/fusdBalance)!
        .borrow<&FUSD.Vault{FungibleToken.Balance}>()
        ?? panic("Could not borrow Balance reference to the Vault")

    return vaultRef.balance
}
and the arguments are:
{"type":"Address","value":"0x580622e745183b16"}
When I run the following command in the CLI, it works: flow scripts execute -n testnet scripts/getFUSDBalance.cdc --arg "Address:580622e745183b16"
I'm not sure why I'm getting issues with this.
EDIT:
I didn't mention that the error is coming from FCL.
Here's the code I'm using to interact with Flow:
async getFUSDBalance(): Promise<Result<number, string>> {
  const scriptText = getFUSDBalance as string;
  const user = await this.getCurrentUser();
  return await flow.query<number>({
    cadence: scriptText,
    payer: fcl.authz,
    authorizations: [fcl.authz],
    args: (arg, t) => [
      arg(user.addr, t.Address)
    ]
  })
}
flow.query<T> is just a wrapper for fcl.query
It works in the CLI but not from FCL
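For context, a minimal sketch of what that wrapper might look like (the flow object name and the generic are assumptions based on the description above; fcl.query itself only needs cadence and args for a read-only script):

import * as fcl from "@onflow/fcl";

export const flow = {
  // thin typed wrapper: forwards the options to fcl.query and casts the decoded result
  async query<T>(options: { cadence: string; args?: (arg: any, t: any) => any[] }): Promise<T> {
    return (await fcl.query(options)) as T;
  },
};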
EDIT 2:
So I found that the real issue I was facing was this error:
Fetch API cannot load access.devnet.nodes.onflow.org:9000/v1/scripts?block_height=sealed. URL scheme "access.devnet.nodes.onflow.org" is not supported
access.devnet.nodes.onflow.org:9000 is the gRPC endpoint (I think). I should have been using https://rest-testnet.onflow.org. However, when I change to that, I get a CORS violation. I think I read somewhere that you can't access the testnet from localhost (why?), but I deployed to a *.app domain and I'm getting the same error: No 'Access-Control-Allow-Origin' header is present on the requested resource. Is the CORS policy not set up for *.app domains?

So it turns out that changing the accessNode.api value to https://rest-testnet.onflow.org did work. Not sure why it didn't when I posted the second edit, but whatever. This works for localhost too.
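For anyone hitting the same thing, a minimal FCL config pointing at the REST access node looks roughly like this (the discovery.wallet URL is the usual testnet wallet discovery endpoint and is an assumption about your setup):

import * as fcl from "@onflow/fcl";

fcl.config()
  // REST/HTTP access node for testnet (not the gRPC endpoint on port 9000)
  .put("accessNode.api", "https://rest-testnet.onflow.org")
  // wallet discovery for testnet
  .put("discovery.wallet", "https://fcl-discovery.onflow.org/testnet/authn");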

Related

Writing logs to a gcloud Vertex AI Endpoint using the gcloud client fails with google.api_core.exceptions.MethodNotImplemented: 501

I'm trying to use the Google Cloud Logging client library to write logs into gcloud; specifically, I'm interested in writing logs that will be attached to a managed resource, in this case a Vertex AI endpoint:
Code sample:
import json
import logging

from google.api_core.client_options import ClientOptions
import google.cloud.logging_v2 as logging_v2
from google.cloud.logging_v2.resource import Resource
from google.oauth2 import service_account

def init_module_logger(module_name: str) -> logging.Logger:
    module_logger = logging.getLogger(module_name)
    module_logger.setLevel(settings.LOG_LEVEL)
    # settings and SA_KEY_JSON are defined elsewhere in the project
    credentials = service_account.Credentials.from_service_account_info(
        json.loads(SA_KEY_JSON)
    )
    client = logging_v2.client.Client(
        credentials=credentials,
        client_options=ClientOptions(api_endpoint="us-east1-aiplatform.googleapis.com"),
    )
    handler = client.get_default_handler(
        resource=Resource(
            type="aiplatform.googleapis.com/Endpoint",
            labels={"endpoint_id": "ENDPOINT_NUMBER_ID",
                    "location": "us-east1"},
        )
    )
    # Assume we have the formatter
    handler.setFormatter(ENRICHED_FORMATTER)
    module_logger.addHandler(handler)
    return module_logger
logger = init_module_logger(__name__)
logger.info("This Fails with 501")
And I am getting:
google.api_core.exceptions.MethodNotImplemented: 501 The GRPC target
is not implemented on the server, host:
us-east1-aiplatform.googleapis.com, method:
/google.logging.v2.LoggingServiceV2/WriteLogEntries. Sent all pending
logs.
I thought we needed to enable the API and was told it's enabled, and that we have the https://www.googleapis.com/auth/logging.write scope.
What could be causing the error?
As mentioned by @DazWilkin in the comment, the error occurs because the API endpoint us-east1-aiplatform.googleapis.com does not have a method called WriteLogEntries.
The above endpoint is used to send requests to Vertex AI services, not to Cloud Logging. The API endpoint to be used is logging.googleapis.com, as shown in the entries.write method. Refer to this documentation for more info.
The ClientOptions() function should have logging.googleapis.com as the api_endpoint parameter. If the client_options parameter is not specified, logging.googleapis.com is used by default.
After changing the api_endpoint parameter, I was able to successfully write the log entries. The ClientOptions() is as follows:
client = logging_v2.client.Client(
    credentials=credentials,
    client_options=ClientOptions(api_endpoint="logging.googleapis.com"),
)

How do I configure my Postman mock server response to return a date always two days in the past?

In my Postman Mock Server, I have set up a GET request to return JSON, in which the following is returned
“due_date":"2021-10-10"
What I would like is to adjust the response so that the date is returned is two days in the past. So if today is “2021-10-10”, I would like the response to contain
“due_date":"2021-10-08”
And if today is “2022-01-01”, I would like the response to contain
“due_date":"2021-12-30”
And so on. How do I set up my Postman mock server request to return such data?
I think it's a good question; besides, I was curious, so I did some research and found a workaround for this. It's a bit complex, and I'm not sure whether it's worth it or not.
First of all, the Postman Mock Server (in short, Mock Server) cannot execute any tests or pre-request scripts, so it is not capable of computing things. You need a calculation here, so what can you do? Well, you can define an environment for the Mock Server, which gives you the ability to use dynamic values in mock responses.
I will continue step by step to show the process.
1 - Open a Mock Server with an environment:
1.1 - Create a collection for the new Mock Server:
Your mock response will look like this:
{"due_date": "{{date}}"}
1.2 - Create an environment:
1.3 - Finish creating it:
1.4 - When you finish, Postman creates a collection like the one below:
1.5 - You can test your Mock Server from this collection:
As you can see, the Mock Server uses the environment variable in its response.
Now we have to figure out how to update the environment variable.
You have to use an external service to update your environment variable. You can use a Postman Monitor for this job because it can execute tests (meaning any code) and works like a cron job, which means you can set a Postman Monitor to update a specific environment variable every 24 hours.
2 - Open a Postman Monitor to update your environment:
2.1 - This step is pretty straightforward; create a Postman Monitor with a configuration like the one below:
2.2 - Write a test to update the environment:
The test will look like this:
// you have to use pm.test(), otherwise Postman Monitor will not execute the test
const moment = require("moment");

pm.test("update date", () => {
    // set the date 2 days in the past
    let startdate = moment();
    const dayCount = 2;
    startdate = startdate.subtract(dayCount, "days");
    startdate = startdate.format("YYYY-MM-DD");

    // this does not work on Postman Monitor, use the Postman API like below
    //pm.environment.set('date', startdate);

    const data = JSON.stringify({
        environment: {
            values: [
                {
                    key: "date",
                    value: startdate,
                },
            ],
        },
    });

    const environmentID = "<your-environment-id>";

    // Set the environment variable with the Postman API
    const postRequest = {
        url: `https://api.getpostman.com/environments/${environmentID}`,
        method: "PUT",
        header: {
            "Content-Type": "application/json",
            "X-API-Key": "<your-postman-api-key>",
        },
        body: {
            mode: "raw",
            raw: data,
        },
    };

    pm.sendRequest(postRequest, (error, response) => {
        console.log(error ? error : response.json());
        // force the test to fail if any error occurs
        if (error) pm.expect(true).to.equal(false);
    });
});
You cannot change an environment variable with pm.environment when you are using a Postman Monitor. You should use the Postman API with pm.sendRequest in your test.
You need to get a Postman API key, and you need to find your environment ID. You can get the environment ID from the Postman API.
To find your environment ID, use this endpoint: https://www.postman.com/postman/workspace/postman-public-workspace/request/12959542-b7ace502-4a5a-4f1c-8164-158811bbf236
To learn how to get a Postman API key: https://learning.postman.com/docs/developer/intro-api/#generating-a-postman-api-key
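If it helps, a rough sketch of listing your environments through the Postman API to find the ID (this can run in a Postman test or monitor via pm.sendRequest; the API key is a placeholder):

// Sketch: list environments via the Postman API to find the one you need
const listRequest = {
  url: "https://api.getpostman.com/environments",
  method: "GET",
  header: { "X-API-Key": "<your-postman-api-key>" },
};

pm.sendRequest(listRequest, (error, response) => {
  // each entry has an id, name and uid; use that id in the PUT request above
  console.log(error ? error : response.json().environments);
});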
2.3 - Run the Postman Monitor manually to make sure the tests are working:
2.4 - As you can see, the Postman Monitor executes the script:
2.5 - When I check the environment, I can see the result:
You can test from the browser to see the results:
I answered this question earlier, but I have another solution.
You can deploy a server to update the variable in your mock environment. If you want to do it for free, just use Heroku.
I wrote a Flask app in Python and deployed it to Heroku; check the code below:
from flask import Flask
import os
import json
import requests
from datetime import datetime, timedelta

app = Flask(__name__)

# the port is randomly assigned by Heroku and then mapped to port 80
port = int(os.environ.get("PORT", 5000))
# debug mode
debug = False

@app.route('/')
def hello_world():
    N_DAYS_AGO = 2

    # calculate the date two days in the past
    today = datetime.now()
    n_days_ago = today - timedelta(days=N_DAYS_AGO)
    n_days_ago_formatted = n_days_ago.strftime("%Y-%m-%d")

    # set the mock environment variable used in the response
    payload = json.dumps({
        "environment": {
            "values": [
                {
                    "key": "date",
                    "value": n_days_ago_formatted
                }
            ]
        }
    })

    postman_api_key = "<your-postman-api-key>"
    headers = {
        'Content-Type': 'application/json',
        'X-API-Key': postman_api_key
    }
    environment_id = "<your-environment-id>"
    url = "https://api.getpostman.com/environments/" + environment_id

    r = requests.put(url, data=payload, headers=headers)

    # return the Postman API response
    return r.content

if __name__ == '__main__':
    app.run(debug=debug, host='0.0.0.0', port=port)
The code calculates the new date and sends it to the mock environment. It works; I tested it on Heroku before writing this answer.
When you visit your Heroku app's page, the code runs and the date environment variable updates automatically; use that environment variable in your mock server to solve the problem.
You need to automate this code execution, so I suggest using UptimeRobot to ping your Heroku app once a day. On every ping, your environment variable will update. Don't overuse it, because Heroku has a usage quota on the free plan.
To use this code you need to learn how to deploy a Flask app on Heroku. By the way, Flask is just an option here; you could use Node.js instead of Python and the logic would stay the same, as sketched below.
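Since Node.js is mentioned as an alternative, here is a rough, untested sketch of the same logic as a small Express app (the package choice, the placeholders, and the assumption of Node 18+ for the built-in fetch are mine; the Postman API call mirrors the Python version above):

// Rough Node.js equivalent of the Flask app above (assumes Node 18+ for the built-in fetch)
import express from "express";

const app = express();
const port = process.env.PORT || 5000; // Heroku injects the port via the PORT env var

app.get("/", async (_req, res) => {
  const N_DAYS_AGO = 2;

  // calculate the date two days in the past, formatted as YYYY-MM-DD
  const past = new Date();
  past.setDate(past.getDate() - N_DAYS_AGO);
  const formatted = past.toISOString().slice(0, 10);

  // update the "date" variable in the mock environment via the Postman API
  const response = await fetch(
    "https://api.getpostman.com/environments/<your-environment-id>",
    {
      method: "PUT",
      headers: {
        "Content-Type": "application/json",
        "X-API-Key": "<your-postman-api-key>",
      },
      body: JSON.stringify({
        environment: { values: [{ key: "date", value: formatted }] },
      }),
    }
  );

  // return the Postman API response, like the Flask version does
  res.json(await response.json());
});

app.listen(port);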

How to solve "Received response status [FAILED] from custom resource. Message returned: Resource is not in the state certificateValidated" in CDK?

I get the following error when trying to create a static website inspired by https://github.com/aws-samples/aws-cdk-examples/blob/master/typescript/static-site/static-site.ts:
const certificateArn = new acm.DnsValidatedCertificate(
  this,
  "SiteCertificateR53",
  {
    domainName: props.siteDomain,
    hostedZone: props.zone,
    region: "us-east-1", // Cloudfront only checks this region for certificates.
  }
).certificateArn;

new cdk.CfnOutput(this, "CertificateR53", { value: certificateArn });
Error:
Received response status [FAILED] from custom resource. Message returned: Resource is not in the state certificateValidated
If you don't need to do cross-region stuff (e.g. us-east-1 needs a resource from us-west-2), using the following method provides the same benefit as DnsValidatedCertificate:
const certificate = new acm.Certificate(this, `SiteCertificateR53`, {
  domainName: props.siteDomain,
  validation: acm.CertificateValidation.fromDns(props.zone),
});
If you still have to do cross-region stuff, then you should create and deploy your hosted zone via the AWS console first and reference it from CDK (a rough sketch of that is below). That won't guarantee a fix, though; this page can help if you're still stuck: https://docs.aws.amazon.com/acm/latest/userguide/troubleshooting-DNS-validation.html
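As a rough sketch (not tested here), referencing a zone that already exists could look something like this, mirroring the v1-style imports used in the question; the domain name is a placeholder, and note that HostedZone.fromLookup requires an explicit account/region (env) on the stack:

import * as route53 from "@aws-cdk/aws-route53";

// look up the hosted zone you created/deployed via the console (placeholder domain)
const zone = route53.HostedZone.fromLookup(this, "Zone", {
  domainName: "example.com",
});

// then validate the certificate against that zone
const certificate = new acm.Certificate(this, "SiteCertificateR53", {
  domainName: props.siteDomain,
  validation: acm.CertificateValidation.fromDns(zone),
});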

ECONNREFUSED during 'next build'. Works fine with 'next dev' [duplicate]

This question already has an answer here:
Fetch error when building Next.js static website in production
(1 answer)
Closed last year.
I have a very simple NextJS 9.3.5 project.
For now, it has a single pages/users page and a single pages/api/users route that retrieves all users from a local MongoDB collection.
It works fine locally using 'next dev'.
But it fails on 'next build' with an ECONNREFUSED error.
pages/users
import fetch from "node-fetch"
import Link from "next/link"

export async function getStaticProps({ params }) {
  const res = await fetch(`http://${process.env.VERCEL_URL}/api/users`)
  const users = await res.json()
  return { props: { users } }
}

export default function Users({ users }) {
  return (
    <ul>
      {users.map(user => (
        <li key={user.id}>
          <Link href="/user/[id]" as={`/user/${user._id}`}>
            <a>{user.name}</a>
          </Link>
        </li>
      ))}
    </ul>
  );
}
pages/api/users
import mongoMiddleware from "../../lib/api/mongo-middleware";
import apiHandler from "../../lib/api/api-handler";
export default mongoMiddleware(async (req, res, connection, models) => {
const {
method
} = req
apiHandler(res, method, {
GET: (response) => {
models.User.find({}, (error, users) => {
if (error) {
connection.close();
response.status(500).json({ error });
} else {
connection.close();
response.status(200).json(users);
}
})
}
});
})
yarn build
yarn run v1.22.4
$ next build
Browserslist: caniuse-lite is outdated. Please run next command `yarn upgrade`
> Info: Loaded env from .env
Creating an optimized production build
Compiled successfully.
> Info: Loaded env from .env
Automatically optimizing pages ..
Error occurred prerendering page "/users". Read more: https://err.sh/next.js/prerender-error:
FetchError: request to http://localhost:3000/api/users failed, reason: connect ECONNREFUSED 127.0.0.1:3000
Any ideas what is going wrong? Particularly when it works fine with 'next dev'?
Thank you.
I tried the same a few days ago and it didn't work... because when we build the app, we don't have localhost available... check this part of the doc - https://nextjs.org/docs/basic-features/data-fetching#write-server-side-code-directly - which says: "You should not fetch an API route from getStaticProps..."
(Next.js 9.3.6)
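In other words, something roughly like this instead of fetching your own /api/users route (getUsersFromDb is a hypothetical helper standing in for whatever direct MongoDB access you have):

// pages/users - sketch: read from the database directly in getStaticProps
export async function getStaticProps() {
  // getUsersFromDb is a hypothetical helper that queries MongoDB directly
  const users = await getUsersFromDb();
  // serialize ObjectIds/Dates so the props are plain JSON
  return { props: { users: JSON.parse(JSON.stringify(users)) } };
}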
Just to be even more explicit on top of what Ricardo Canelas said:
When you do next build, Next goes over all the pages it detects that it can build statically, i.e. all pages that don't define getServerSideProps, but which possibly define getStaticProps and getStaticPaths.
To build those pages, Next calls getStaticPaths to decide which pages you want to build, and then getStaticProps to get the actual data needed to build the page.
Now, if in either of getStaticPaths or getStaticProps you do an API call, e.g. to a JSON backend REST server, then this will get called by next build.
However, if you've integrated both the frontend and backend nicely into a single server, chances are that you have just quit your development server (next dev) and are now trying out a build to see if things still work as a sanity check before deployment.
So in that case, the build will try to access your server, and since it won't be running, you get an error like that.
The correct approach is, instead of going through the REST API, to just do database queries directly from getStaticPaths or getStaticProps. That code never runs on the client anyway, only on the server, so it will also be slightly more efficient than making a useless trip to the API, which then calls the database indirectly. I have a demo that does that here: https://github.com/cirosantilli/node-express-sequelize-nextjs-realworld-example-app/blob/b34c137a9d150466f3e4136b8d1feaa628a71a65/lib/article.ts#L4
export const getStaticPathsArticle: GetStaticPaths = async () => {
  return {
    fallback: true,
    paths: (await sequelize.models.Article.findAll()).map(
      article => {
        return {
          params: {
            pid: article.slug,
          },
        }
      }
    ),
  }
}
Note how in that example, both getStaticPaths and getStaticProps (here generalized into HOCs for reuse; see also: Module not found: Can't resolve 'fs' in Next.js application) do direct database queries via the Sequelize ORM, and don't make any HTTP calls to the external server API.
You should then only make client API calls from the React components in the browser after the initial page load (i.e. from useEffect et al.), not from getStaticPaths or getStaticProps; a minimal sketch is shown below. BTW, note that as mentioned at: What is the difference between fallback false vs true vs blocking of getStaticPaths with and without revalidate in Next.js SSR/ISR?, reducing client calls as much as possible and prerendering on the server greatly reduces application complexity.
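For completeness, a minimal sketch of a client-side fetch from a component after the initial load (using the /api/users route from the question above):

import { useEffect, useState } from "react";

export default function Users() {
  const [users, setUsers] = useState([]);

  // runs only in the browser, after the page has rendered
  useEffect(() => {
    fetch("/api/users")
      .then((res) => res.json())
      .then(setUsers);
  }, []);

  return (
    <ul>
      {users.map((user) => (
        <li key={user._id}>{user.name}</li>
      ))}
    </ul>
  );
}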

gcloud deploy function error code 3

I'm following the api.ai tutorial on basic fulfillment and conversation setup to make a bot for Facebook Messenger, and when I try to deploy the function with the command:
gcloud beta functions deploy testBot --stage-bucket testbot-e9bc4.appspot.com --trigger-http
(where 'testBot' is the name of the project and 'testbot-e9bc4.appspot.com' is the bucket name, I thought..)
it returns the following error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Source code size exceeds the limit
I've searched but haven't found any answer; I don't know where the error is.
This is the JS file that appears in the tutorial:
/**
 * HTTP Cloud Function.
 * @param {Object} req Cloud Function request context.
 * @param {Object} res Cloud Function response context.
 */
exports.helloHttp = function helloHttp (req, res) {
  const response = "This is a sample response from your webhook!"; // Default response from the webhook to show it's working
  res.setHeader('Content-Type', 'application/json'); // Requires application/json MIME type
  res.send(JSON.stringify({ "speech": response, "displayText": response
  // "speech" is the spoken version of the response, "displayText" is the visual version
  }));
};
Make sure you are in the correct directory where your function's source code resides before executing the gcloud beta functions deploy testBot --stage-bucket testbot-e9bc4.appspot.com --trigger-http command.
I was working on an Express project. In my case, I had mistakenly installed the @google/storage package in devDependencies instead of dependencies. I didn't notice because I tested the project in debug mode using Mocha, so in debug it was able to find the package in devDependencies; but the deployed function looks for it in the dependencies section of package.json and can't find it there.
Open a command prompt at the location where index.js was created and run the above gcloud command.