Cannot upload CandyMachine to mainnet-beta - Error in arweave-bundle.ts - metaplex

I already tested on DevNet and I am ready to deploy on mainnet-beta with the following command
ts-node ./src/candy-machine-v2-cli.ts upload -e mainnet-beta -k "C:\path\auth.json" -cp config.json -c temp ./assets
The only thing I changed from the DevNet config.json is "storage": "arweave-sol"
I am getting this error:
Starting upload for [3500] items, format {"mediaExt":".png","index":"0"}
1.169886324 SOL to upload 5185.453MB with buffer
Current balance 1.173224724 is sufficient.
Computed Bundle range, including 63 file pair(s) totaling 99.031MB.
Processing file groups...
Progress: [░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░] 0% | 0/63
upload was not successful, please re-run. TypeError: manifest.properties.files.forEach is not a function
at getUpdatedManifest (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:330:29)
at async processFiles (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:407:20)
at async processBundleFilePair (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:607:15)
at async processBundleFilePair (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:598:23)
at async processBundleFilePair (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:598:23)
at async processBundleFilePair (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:598:23)
at async processBundleFilePair (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:598:23)
at async processBundleFilePair (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:598:23)
at async processBundleFilePair (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:598:23)
at async processBundleFilePair (C:\Users\captainb\Documents\GitHub\metaplex\js\packages\cli\src\helpers\upload\arweave-bundle.ts:598:23)

Try using arweave-sol in your config.json file, or make sure you change the name of the -c parameter.

You know, it was a very odd error: the same config file that worked on devnet didn't work on mainnet. It turned out the JSON was malformed and I had to convert the Attributes section to an array.
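For reference, a minimal sketch of how each per-item metadata JSON is shaped under the Metaplex token metadata standard (names and values here are placeholders, not from the original assets). Note that both attributes and properties.files are arrays, which is what getUpdatedManifest expects when it calls forEach:
{
  "name": "Item #0",
  "image": "0.png",
  "attributes": [
    { "trait_type": "Background", "value": "Blue" }
  ],
  "properties": {
    "files": [
      { "uri": "0.png", "type": "image/png" }
    ]
  }
}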

Related

Test failed to load while creating a Store

I try to unit-test, but it fails to load:
dart:ffi new DynamicLibrary.open
package:objectbox/src/native/bindings/bindings.dart 21:28 loadObjectBoxLib
package:objectbox/src/native/bindings/bindings.dart 50:41 C
package:objectbox/src/native/model.dart 18:31 new Model
package:objectbox/src/native/store.dart 63:17 new Store
package:productivitie/features/to_do_listing/data/datasource/project_database_data_source_object_box.dart 23:15 new ProjectDataBaseDataSourceObjectBox.<fn>
dart:async _completeOnAsyncReturn
package:path_provider/path_provider.dart getApplicationDocumentsDirectory
Failed to load "F:\Programme\gitProgramme\productivitie\test\features\to_do_listing\data\datasource\project_database_data_source_object_box_test.dart": Invalid argument(s): Failed to load dynamic library (193)
My constructor, where the problem occurs:
21 ProjectDataBaseDataSourceObjectBox() {
22   getApplicationDocumentsDirectory().then((Directory dir) {
23     store = Store(getObjectBoxModel(), directory: dir.path + '/objectbox');
24     box = store!.box<Project>();
25   });
26
27 }
Flutter Doctor found no issues.
I ran build_runner on my models again (overwriting the objectbox.g.dart file), but that didn't help.
My versions are:
objectbox: 0.14.0
objectbox_flutter_libs: any
path_provider: ^2.0.1
I first thought it was a problem with path_provider, so I set a MockMethodCallHandler to return a mocked directory path whenever path_provider tries to getApplicationDocumentsDirectory:
final directory = await Directory.systemTemp.createTemp();
const MethodChannel('plugins.flutter.io/path_provider')
    .setMockMethodCallHandler((MethodCall call) async {
  if (call.method == 'getApplicationDocumentsDirectory') {
    return directory.path;
  }
  return null;
});
But that didn't help either.
The important part of the error is:
Failed to load dynamic library (193)
From the paths in your question, I assume you're running on Windows. In that case, you need to install the dynamic library so that the compiled test executable can find it. Given how Windows loads DLLs, this can be either the same directory as the .exe or a system directory. Following the installation instructions for Dart CLI apps or Flutter desktop apps should help:
Install objectbox-c system-wide (use "Git bash" on Windows):
bash <(curl -s https://raw.githubusercontent.com/objectbox/objectbox-dart/main/install.sh)
Copy the downloaded lib/objectbox.dll to C:\Windows\System32\ (requires admin privileges).
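For example, from an elevated Command Prompt in the project root (assuming the install script placed the library in lib\, as described above):
copy lib\objectbox.dll C:\Windows\System32\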

awaiting for aiofiles.os.remove() returns TypeError: object NoneType can't be used in 'await' expression

I'm trying to remove a file from a local directory asynchronously; however, I get the following error:
object NoneType can't be used in 'await' expression ()
I'm using aiofiles 0.5.0 and Python 3.6.5.
My code is as straightforward as this:
async def delete_local_file(file_to_del):
    await aiof.os.remove(file_to_del)
    print("deleted: " + file_to_del)

await delete_local_file(localfile)
Accidentally, I used the wrong reference during the import. Simply use:
import aiofiles.os
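Put together, a minimal corrected sketch of the snippet above (the file name is a placeholder; Python 3.6 is assumed, so the coroutine is driven with run_until_complete rather than asyncio.run):
import asyncio
import aiofiles.os  # import the submodule explicitly, not just aiofiles

async def delete_local_file(file_to_del):
    # aiofiles.os.remove is an async wrapper around os.remove
    await aiofiles.os.remove(file_to_del)
    print("deleted: " + file_to_del)

# drive the coroutine (Python 3.6: asyncio.run is not available yet)
asyncio.get_event_loop().run_until_complete(delete_local_file("some_local_file.txt"))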

Can I run multiple integration tests with one single config file in Flutter?

I am trying to write Flutter integration tests and to run them all with one config file instead of making a config file for every single test. Is there any way to do that?
For now I have login.dart and login_test.dart, and so on for every single test. I know it's convention that every config and test file has the same name, but that's not what I need; something more configurable is welcome. Thanks in advance.
This is my config file (login.dart)
import 'package:flutter_driver/driver_extension.dart';
import 'package:seve/main.dart' as app;
void main() {
  enableFlutterDriverExtension();
  app.main();
}
And test (login_test.dart) looks something like this
import ...
FlutterDriver driver;

void main() {
  setUpAll(() async {
    driver = await FlutterDriver.connect();
  });

  tearDownAll(() async {
    if (driver != null) {
      driver.close();
    }
  });

  test('T001loginAsDriverAndVerifyThatDriverIsLogedInTest', () async {
    some_code...
  });
}
Now I want to make a new test file (e.g. login_warning.dart) and be able to start both tests by calling a single config file (login.dart). Is that even possible?
Yes, running multiple "test" files with the same "config" is possible.
In the flutter jargon, your config file is your target and your test file is your driver. Your target is always login.dart but you have the two drivers login_test.dart and login_warning.dart.
With the flutter drive command, you can specify the target as well as the driver.
So in order to run both drivers, simply execute the following commands
flutter drive --target=test_driver/login.dart --driver=test_driver/login_test.dart
flutter drive --target=test_driver/login.dart --driver=test_driver/login_warning.dart
This executes first the login_test.dart and then the login_warning.dart driver.
You can always have one main test file that you initiate, like:
flutter drive --target=test_driver/app_test.dart
Then in that, call your test groups as functions, like so:
void main() {
  test1();
}

void test1() {
  group('test 1', () {});
}
So with one command you get to execute all the cases mentioned in main().
Like vzurd's answer, my favourite and cleanest approach is to create a single test file and call all the main methods from within it:
import './first_test.dart' as first;
import './second_test.dart' as second;

void main() {
  first.main();
  second.main();
}
Then just run driver on the single test file:
flutter drive --driver=test/integration/integration_test_driver.dart --target=test/integration/run_all_test.dart
To expand on @sceee's answer:
You can put the multiple commands into a shell script named integration_tests.sh, for example, and run them all with a single command that way.
#!/bin/sh
flutter drive --target=test_driver/app.dart --driver=test_driver/app_test.dart
flutter drive --target=test_driver/app.dart --driver=test_driver/start_screen_test.dart
Make it executable:
$ chmod a+rx integration_tests.sh
Run it:
$ ./integration_tests.sh
We can use a shell script to automate this process.
The following solution works even with newly added test files, without manually adding their names to any file.
Create a shell script named integrationTestRunner.sh inside the root directory. You can use the command:
touch integrationTestRunner.sh
Inside integrationTestRunner.sh file, paste the following code.
#!/bin/bash
# Declare an array to store the file names and paths
declare -a targets
# Find all .dart files in the current directory and subdirectories
while IFS= read -r -d $'\0' file; do
  targets+=("$file")
done < <(find integration_test -name "*.dart" -type f -print0)
# Loop through the array and run the flutter drive command for each target
for target in "${targets[@]}"
do
  flutter drive \
    --driver=test_driver/integration_test_driver.dart \
    --target=$target
done
Run the integrationTestRunner.sh file using any of these methods:
Pressing the ▶️ button in that file (if you are in VS Code)
Running the script from the command line: ./integrationTestRunner.sh

Set Google Storage Bucket's default cache control

Is there any way to set a bucket's default cache control (trying to override the public, max-age=3600 applied at the bucket level every time a new object is created)?
Similar to defacl, but for cache control.
If someone is still looking for an answer, one needs to set the metadata while adding the blob.
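For example, with the Python client library (a minimal sketch; the bucket and object names are placeholders):
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("your-bucket-name")  # placeholder bucket name
blob = bucket.blob("path/to/object.png")

# Set Cache-Control before uploading so the object is created with it
blob.cache_control = "public, max-age=86400"
blob.upload_from_filename("local-file.png")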
For those who want to update the metadata for all existing objects in the bucket, you can use setmeta from gsutil - https://cloud.google.com/storage/docs/gsutil/commands/setmeta
You just need to do the following :
gsutil setmeta -r -h "Cache-control:public, max-age=12345" gs://bucket_name
Using gsutil
-h: Allows you to specify certain HTTP headers
-r: Recursive
-m: Performs the operations in parallel, which can run significantly faster.
gsutil -m setmeta -r -h "Cache-control:public, max-age=259200" gs://bucket-name
It is possible to write a Google Cloud Storage Trigger.
This function sets the Cache-Control metadata field for every new object in a bucket:
from google.cloud import storage
CACHE_CONTROL = "private"
def set_cache_control_private(data, context):
    """Background Cloud Function to be triggered by Cloud Storage.

    This function changes the Cache-Control metadata.

    Args:
        data (dict): The Cloud Functions event payload.
        context (google.cloud.functions.Context): Metadata of the triggering event.
    Returns:
        None; the output is written to Stackdriver Logging.
    """
    print('Setting Cache-Control to {} for: gs://{}/{}'.format(
        CACHE_CONTROL, data['bucket'], data['name']))
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(data['bucket'])
    blob = bucket.get_blob(data['name'])
    blob.cache_control = CACHE_CONTROL
    blob.patch()
You also need a requirements.txt file in the same directory for the storage import. In it, list the google-cloud-storage package:
google-cloud-storage==1.10.0
You have to deploy the function to a specific bucket:
gcloud beta functions deploy set_cache_control_private \
--runtime python37 \
--trigger-resource gs://<your_bucket_name> \
--trigger-event google.storage.object.finalize
For debugging purpose you can retrieve logs with gcloud command as well:
gcloud functions logs read --limit 50
I know that this is quite an old question and you're after a default action (which I'm not sure exists), but the below worked for me on a recent PHP project after much frustration:
$object = $bucket->upload($tempFile, [
    'predefinedAcl' => "PUBLICREAD",
    'name' => $destination,
    'metadata' => [
        'cacheControl' => 'Cache-Control: private, max-age=0, no-transform',
    ]
]);
Same can be applied in node:
const storage = new Storage();
const bucket = storage.bucket(BUCKET_NAME);
const blob = bucket.file(FILE_NAME);

const uploadProgress = new Promise((resolve, reject) => {
  const blobStream = blob.createWriteStream();
  blobStream.on('error', err => {
    reject(err);
    throw new Error(err);
  });
  blobStream.on('finish', () => {
    resolve();
  });
  blobStream.end(file.buffer);
});

await uploadProgress;

if (isPublic) {
  await blob.makePublic();
}

blob.setMetadata({ cacheControl: 'public, max-age=31536000' });
There is no way to specify a default cache control. It must be set when creating the object.
If you're using a python app, you can use the option "default_expiration" in your app.yaml to set a global default value for the Cache-Control header: https://cloud.google.com/appengine/docs/standard/python/config/appref
For example:
runtime: python27
api_version: 1
threadsafe: yes
default_expiration: "30s"

How can I run SQL on PostgreSQL RDS from Lambda function in Node.js?

I have this code in a Lambda function:
sql="SELECT ...";
var pg = require('pg');
var connectionString = 'postgres://...';
var client = new pg.Client(connectionString);
client.connect();
var query = client.query(sql);
query.on('end', function() { client.end(); });
When I run from EC2, it works fine. When I run from Lambda I get Error: Cannot find module 'pg'
I am a super noob in Node.js, but I really wanted to try AWS Lambda. Here are the steps I took. I used an Ubuntu 14.04 machine.
sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm
sudo ln -s /usr/bin/nodejs /usr/bin/node
mkdir the_function && cd the_function
mkdir node_modules
npm install pg
Now create a file index.js and paste the content of your function:
console.log('Loading S2 Function');

var pg = require("pg");

exports.handler = function(event, context) {
  var conn = "pg://user:password@host:5432/bd_name";
  var client = new pg.Client(conn);
  client.connect();
  var query = client.query("SELECT * FROM BLA WHERE ID = 1");
  query.on("row", function (row, result) {
    result.addRow(row);
  });
  query.on("end", function (result) {
    var jsonString = JSON.stringify(result.rows);
    var jsonObj = JSON.parse(jsonString);
    console.log(jsonString);
    client.end();
    context.succeed(jsonObj);
  });
};
Now zip the contents of the_function folder (not the_function folder itself).
You can check the official sample from this AWS link: http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser-create-test-function-create-function.html
You can easily import only predefined libs into your Lambda. For example, you can use just boto3 and the core libraries for Python, and just the core libraries for Java: http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
You cannot import any additional libs in a simple way.
You can try the "hard way":
In that case, you should save all the necessary libraries to S3 (or another place you can access from Lambda), then copy them to the Lambda environment (/tmp) and import them with the help of reflection.
Error: Cannot find module 'pg'
In my case I was just uploading index.js. We need to package 3rd party node modules as well.
create index.js (name may vary based on your handler name)
run npm install <package>
It is better to create a package.json with all the dependencies you need and run npm install (see the sketch after these steps).
Confirm node_modules folder is created in same directory.
Zip these contents (index.js and node_modules folder) and upload the zip.
You can upload directly or use S3.
For more details, read their official doc: Creating a Deployment Package (Node.js)
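As a sketch, a minimal package.json for this case might look like the following (the package name and version here are illustrative, not prescribed by the docs):
{
  "name": "lambda-pg-example",
  "version": "1.0.0",
  "dependencies": {
    "pg": "^8.7.1"
  }
}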
Now I get: Unable to import module 'index': Error. Does my function have to be called index.js?
In my case I was zipping the entire directory instead of its contents. So you really need to do:
zip -r deploymentpkg.zip ./*
instead of
zip -r deploymentpkg.zip folder