Cardano Pre-Production Testnet TrConnectError

Since the documentation on the Cardano developer portal is outdated (it still covers the old testnet), I researched and now know about the new testnets documented at https://book.world.dev.cardano.org/environments.html and on GitHub.
I followed the tutorial in the documentation at https://developers.cardano.org/docs/get-started/running-cardano about running the node for the testnet, but instead of using the deprecated files I used the new environment files for the pre-production testnet.
Now I can't sync the network.
I get the following info over and over again:
TrConnectError (Just 127.0.0.1:1337) 3.72.231.105:30000 Network.Socket.connect: <socket: 29>: invalid argument (Invalid argument)
TrConnectionManagerCounters (ConnectionManagerCounters {fullDuplexConns = 0, duplexConns = 0, unidirectionalConns = 0, inboundConns = 0, outboundConns = 0})
TracePromoteColdFailed 50 0 3.72.231.105:30000 160.633570297628s Network.Socket.connect: <socket: 29>: invalid argument (Invalid argument)
TraceGovernorWakeup
TracePublicRootsRequest 100 1
TracePublicRootRelayAccessPoint [RelayAccessDomain "preprod-node.world.dev.cardano.org" 30000]
TracePublicRootResult "preprod-node.world.dev.cardano.org" [(3.72.231.105,60)]
TracePublicRootsResults (fromList []) 9 512s
I can get the sync status, which looks like a first-time run:
{
"block": 0,
"epoch": 0,
"era": "Byron",
"hash": "9ad7ff320c9cf74e0f5ee78d22a85ce42bb0a487d0506bf60cfb5a91ea4497d2",
"slot": 0,
"syncProgress": "0.01"
}
I tried it with the devnet and the preview testnet too; neither worked.
Cardano node version (currently the newest):
cardano-node 1.35.3 - linux-x86_64 - ghc-8.10
git rev ea6d78c775d0f70dde979b52de022db749a2cc32
Does anyone know why this happens and how to fix it?

Run the node with "--host-addr 0.0.0.0" instead of 127.0.0.1. The trace shows the node trying to reach the public relay from the loopback address 127.0.0.1:1337, which is what produces the "invalid argument" error. For example:
cardano-node run --config $HOME/cardano/preprod/config.json \
  --database-path $HOME/cardano/db \
  --socket-path $HOME/cardano/db/node.socket \
  --host-addr 0.0.0.0 \
  --port 1337 \
  --topology $HOME/cardano/preprod/topology.json
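After restarting the node with the command above, syncing can be checked the same way as in the question. A small sketch, assuming the socket path from the command above and network magic 1 for the pre-production testnet:
export CARDANO_NODE_SOCKET_PATH=$HOME/cardano/db/node.socket
cardano-cli query tip --testnet-magic 1
# syncProgress should start climbing past 0.01 once peers connect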

Related

How to deploy the kinesis-video-producer Docker image from AWS's own ECR to Fargate using CDK in TypeScript?

I'm trying to stand up a proof of concept that ingests an RTSP video stream into Kinesis Video. The provided documentation has a docker image all set up that seems to have everything I need to do this, hosted by AWS on 546150905175.dkr.ecr.us-west-2.amazonaws.com. What I am having trouble with, though, is getting that deployment (via an Amplify Custom category, in TypeScript CDK) to work.
I've tried different variations on
import * as iam from "@aws-cdk/aws-iam";
import * as ecs from "@aws-cdk/aws-ecs";
import * as ec2 from "@aws-cdk/aws-ec2";
const kinesisUserAccessKey = new iam.AccessKey(this, 'KinesisStreamUserAccessKey', {
  user: kinesisStreamUser,
})

const servicePrincipal = new iam.ServicePrincipal('ecs-tasks.amazonaws.com');
const executionRole = new iam.Role(this, 'IngestVideoTaskDefExecutionRole', {
  assumedBy: servicePrincipal,
  managedPolicies: [
    iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AmazonECSTaskExecutionRolePolicy'),
  ]
});

const taskDefinition = new ecs.FargateTaskDefinition(this, 'IngestVideoTaskDef', {
  cpu: 512,
  memoryLimitMiB: 1024,
  executionRole,
})

const image = ecs.ContainerImage.fromRegistry('546150905175.dkr.ecr.us-west-2.amazonaws.com/kinesis-video-producer-sdk-cpp-amazon-linux:latest');

taskDefinition.addContainer('IngestVideoContainer', {
  command: [
    'gst-launch-1.0',
    'rtspsrc',
    `location="${locationParam.secretValue.toString()}"`,
    'short-header=TRUE',
    '!',
    'rtph264depay',
    '!',
    'video/x-h264,',
    'format=avc,alignment=au',
    '!',
    'kvssink',
    `stream-name="${cfnStream.name}"`,
    'storage-size=512',
    `access-key="${kinesisUserAccessKey.accessKeyId}"`,
    `secret-key="${kinesisUserAccessKey.secretAccessKey.toString()}"`,
    `aws-region="${REGION}"`,
    // `aws-region="${cdk.Aws.REGION}"`,
  ],
  image,
  logging: new ecs.AwsLogDriver({
    streamPrefix: 'IngestVideoContainer',
  }),
})

const service = new ecs.FargateService(this, 'IngestVideoService', {
  cluster,
  taskDefinition,
  desiredCount: 1,
  securityGroups: [
    ec2.SecurityGroup.fromSecurityGroupId(this, 'DefaultSecurityGroup', SECURITY_GROUP_ID)
  ],
  vpcSubnets: {
    subnets: SUBNET_IDS.map(subnetId => ec2.Subnet.fromSubnetId(this, subnetId, subnetId)),
  }
})
But it seems like regardless of what I do, an amplify push just stays 'in progress' for about an hour until I go into the CloudFormation console and cancel the stack update. Digging through to the ECS console, though, I managed to find an actual error message:
Resourceinitializationerror: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post "https://api.ecr.us-west-2.amazonaws.com/": dial tcp 52.94.177.118:443: i/o timeout
It seems to be some kind of networking issue, but I'm not sure how to proceed. Any assistance you can provide would be wonderful. Cheers!
Figured it out. For those stuck with similar issues: you have to give the task definition an execution role with AmazonECSTaskExecutionRolePolicy (which I had already edited in above), and set assignPublicIp: true in the service.
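For illustration, a minimal sketch of the service construct from the question with that flag added (only assignPublicIp is new; it gives the task a public IP so it can reach the ECR endpoint that was timing out, assuming the chosen subnets are public):
const service = new ecs.FargateService(this, 'IngestVideoService', {
  cluster,
  taskDefinition,
  desiredCount: 1,
  assignPublicIp: true, // lets the task reach api.ecr.us-west-2.amazonaws.com without a NAT gateway
  securityGroups: [
    ec2.SecurityGroup.fromSecurityGroupId(this, 'DefaultSecurityGroup', SECURITY_GROUP_ID),
  ],
  vpcSubnets: {
    subnets: SUBNET_IDS.map(subnetId => ec2.Subnet.fromSubnetId(this, subnetId, subnetId)),
  },
})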

Cannot update `max_wal_senders` parameter, always set to 1

I'm trying to increase the max_wal_senders parameter, and no matter how or what I set it to, it always shows up as 1.
I updated postgresql.conf; there is only one instance of max_wal_senders and it's set to 10.
I've also used alter system set max_wal_senders = 10 and verified it shows as 10 in postgresql.auto.conf.
I've restarted the DB multiple times, and other config changes like max_connections do show as updated when I look at show max_connections, so I know the config file I'm updating is the correct one.
Running select * from pg_settings where name = 'max_wal_senders';
the current value is 1, boot_val is 10, and reset_val is 1.
It seems like the setting is getting reset or the change just isn't being applied for some reason, but I'm not having this issue with any other parameter. Anything I'm missing?
It should also be noted that I'm running Postgres through Docker, and my method for restarting Postgres is simply restarting the Docker container. (Again, this works for other config changes, so I'm not sure it matters.)
{
"select * from pg_settings where name = 'max_wal_senders'": [
{
"name" : "max_wal_senders",
"setting" : "1",
"unit" : null,
"category" : "Replication \/ Sending Servers",
"short_desc" : "Sets the maximum number of simultaneously running WAL sender processes.",
"extra_desc" : null,
"context" : "postmaster",
"vartype" : "integer",
"source" : "command line",
"min_val" : "0",
"max_val" : "262143",
"enumvals" : null,
"boot_val" : "10",
"reset_val" : "1",
"sourcefile" : null,
"sourceline" : null,
"pending_restart" : false
}
]}
Checking my docker-compose.yml, the postgres command is setting -cmax_wal_senders=1:
postgres -cwal_level=archive -carchive_mode=on -carchive_command="/usr/bin/wget wale/wal-push/%f -O -" -carchive_timeout=600 -ccheckpoint_timeout=700 -cmax_wal_senders=1
I've updated this to 10, though, and have restarted the container, and I'm still seeing 1.
postgres -cwal_level=archive -carchive_mode=on -carchive_command="/usr/bin/wget wale/wal-push/%f -O -" -carchive_timeout=600 -ccheckpoint_timeout=700 -cmax_wal_senders=10
The explanation can be seen in the output from pg_settings: the source of the setting is "command line". That means that the server was started with that explicit parameter value, e.g.
postgres -c max_wal_senders=1 -D datadir
That will override the setting in the configuration files.
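One extra note on the Docker side, offered as an assumption about the setup rather than something stated in the thread: docker restart keeps the container's original command line, so editing the command in docker-compose.yml has no effect until the container is recreated, for example:
docker-compose up -d --force-recreate
# or simply: docker-compose up -d   (recreates services whose configuration changed)
Until the container is recreated, pg_settings will keep reporting the old command-line value of 1.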

How to debug session leaking or close all sessions in python MongoDB?

This is my first time using MongoDB to manage an image dataset (~10 million images).
My environment is MongoDB 5.0.6 with PyMongo 4.0.2 and Python 3.9.6 on Ubuntu 18.04.
My dataset accesses MongoDB through PyMongo and is then used to train a DNN in PyTorch. My code warns at the beginning:
UserWarning: MongoClient opened before fork.
Create MongoClient only after forking.
See PyMongo's documentation for details: https://pymongo.readthedocs.io/en/stable/faq.html#is-pymongo-fork-safe
(I checked this URL and it didn't help. I think I do recreate the client each time the class is instantiated.)
After running for a while, my code crashes and exits with:
Unable to add session ID ffd152cf-97d3-454a-882a-c6fc693e2985 - 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
into the cache because the number of active sessions is too high,
full error:
{'ok': 0.0,
 'errmsg': 'Unable to add session ID ffd152cf-97d3-454a-882a-c6fc693e2985 - 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= into the cache because the number of active sessions is too high',
 'code': 261,
 'codeName': 'TooManyLogicalSessions'}
The same error then repeats with another session ID (85b35e6c-fc83-41d1-915b-83e6841c5467).
As the error message says, it seems I have simply opened too many sessions.
How can I check the current active sessions in PyMongo? How can I close all sessions? Or how can I debug this problem further?
Thank you very much! My active session counts and a sample of my dataset code are shown below.
1. Check the active sessions
My active sessions (db.serverStatus().connections) are indeed increasing:
{
current: 6,
available: 51194,
totalCreated: 199,
active: 2,
threaded: 6,
exhaustIsMaster: 0,
exhaustHello: 1,
awaitingTopologyChanges: 1
}
to
{
current: 156,
available: 51044,
totalCreated: 361,
active: 54,
threaded: 156,
exhaustIsMaster: 0,
exhaustHello: 51,
awaitingTopologyChanges: 51
}
and 5 minutes later, my program broke. (I don't know how to get this result from PyMongo instead of mongosh, so I can only check it manually and periodically. At this moment it seems there are still 51044 available, which should not be used up so quickly in only 5 minutes.)
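As an aside on getting these numbers from PyMongo rather than mongosh: serverStatus is an ordinary database command, so the same counters can be polled from Python. A small sketch, reusing the connection details from the dataset code below:
from pymongo import MongoClient

client = MongoClient(host='127.0.0.1', port=27017)
status = client.admin.command('serverStatus')  # same document mongosh returns for db.serverStatus()
print(status['connections'])                   # {'current': ..., 'available': ..., 'active': ..., ...}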
2. My sample dataset code
It looks like this:
from torch.utils.data import Dataset
from pymongo import MongoClient

class MongoDataset(Dataset):
    def __init__(self, dbName):
        client = MongoClient(host='127.0.0.1', port=27017, connect=False)
        db = client[dbName]
        self.dataTable = db["dataTable"]

    def getData(self, _id):
        return self.dataTable.find_one({"_id": _id})

    def __len__(self):
        return self.dataTable.estimated_document_count()
This class will be automatically forked, and recreated.
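Since no accepted fix is quoted here, the following is only a sketch of the direction the PyMongo warning points at: create the MongoClient after the fork, for example lazily on first use inside each DataLoader worker, instead of in __init__. The names mirror the question's code; the lazy-property approach itself is an assumption, not a verified solution:
from torch.utils.data import Dataset
from pymongo import MongoClient

class MongoDataset(Dataset):
    def __init__(self, dbName):
        self.dbName = dbName
        self._client = None                      # created lazily, after the worker process has forked

    @property
    def dataTable(self):
        if self._client is None:                 # first access happens inside the forked worker
            self._client = MongoClient(host='127.0.0.1', port=27017)
        return self._client[self.dbName]["dataTable"]

    def getData(self, _id):
        return self.dataTable.find_one({"_id": _id})

    def __len__(self):
        return self.dataTable.estimated_document_count()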

Passing environment variables to NOW

I am trying to pass Firebase environment variables for deployment with Now.
I have encoded these variables manually with base64 and added them to now with the following command:
now secrets add firebase_api_key_dev "mybase64string"
The encoded string was placed within speech marks ""
These are in my CLI tool and I can see them all using the list command:
now secrets ls
> 7 secrets found under project-name [499ms]
name created
firebase_api_key_dev 6d ago
firebase_auth_domain_dev 6d ago
...
In my firebase config, I am using the following code:
const config = {
apiKey: Buffer.from(process.env.FIREBASE_API_KEY, "base64").toString(),
authDomain: Buffer.from(process.env.FIREBASE_AUTH_DOMAIN,"base64").toString(),
...
}
In my now.json file I have the following code:
{
  "env": {
    "FIREBASE_API_KEY": "@firebase_api_key_dev",
    "FIREBASE_AUTH_DOMAIN": "@firebase_auth_domain_dev",
    ...
  }
}
Everything works fine in my local environment (when I run next) as I also have a .env file with these variables, yet when I deploy my code, I get the following error in my now console:
TypeError [ERR_INVALID_ARG_TYPE]: The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type undefined
Does this indicate that my environment variables are not being read? What's the issue here? It looks like they don't exist at all.
The solution was to replace my existing now.json with:
{
  "build": {
    "env": {
      "FIREBASE_API_KEY": "@firebase_api_key",
      "FIREBASE_AUTH_DOMAIN": "@firebase_auth_domain",
      "FIREBASE_DATABASE_URL": "@firebase_database_url",
      "FIREBASE_PROJECT_ID": "@firebase_project_id",
      "FIREBASE_STORAGE_BUCKET": "@firebase_storage_bucket",
      "FIREBASE_MESSAGING_SENDER_ID": "@firebase_messaging_sender_id",
      "FIREBASE_APP_ID": "@firebase_app_id",
      "FIREBASE_API_KEY_DEV": "@firebase_api_key_dev",
      "FIREBASE_AUTH_DOMAIN_DEV": "@firebase_auth_domain_dev",
      "FIREBASE_DATABASE_URL_DEV": "@firebase_database_url_dev",
      "FIREBASE_PROJECT_ID_DEV": "@firebase_project_id_dev",
      "FIREBASE_STORAGE_BUCKET_DEV": "@firebase_storage_bucket_dev",
      "FIREBASE_MESSAGING_SENDER_ID_DEV": "@firebase_messaging_sender_id_dev",
      "FIREBASE_APP_ID_DEV": "@firebase_app_id_dev"
    }
  }
}
I was missing the build header.
I had to contact ZEIT support to help me identify this issue.

How to integrate PostgreSQL with Corda 3.0 or with Corda 4.0?

I tried to configure PostgreSQL as the node's database in Corda 3.0 and in Corda 4.0.
I added the following to the build.gradle file. (Testdb1 is the database name; I have also tried with postgres.)
node {
    ...
    // this part I have added
    extraConfig = [
        jarDirs: ['path'],
        'dataSourceProperties': [
            'dataSourceClassName': 'org.postgresql.ds.PGSimpleDataSource',
            '"dataSource.url"' : 'jdbc:postgresql://127.0.0.1:5432/Testdb1',
            '"dataSource.user"' : 'postgres',
            '"dataSource.password"': 'admin#123'
        ],
        'database': [
            'transactionIsolationLevel': 'READ_COMMITTED'
        ]
    ]
    // till here
}
This part is in the reference.conf file:
dataSourceProperties = {
    dataSourceClassName = org.postgresql.ds.PGSimpleDataSource
    dataSource.url = "jdbc:postgresql://127.0.0.1:5432/Testdb1"
    dataSource.user = postgres
    dataSource.password = "admin#123"
}
database = {
    transactionIsolationLevel = "READ_COMMITTED"
}
jarDirs = ["path"]
I got the following error while deploying the nodes:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':java-source:deployNodes'.
The node-info-gen.log file showed a CAPSULE EXCEPTION. I then updated my JDK to 8u191 but still got the same error.
I have gone through the following references to get things done; one can take these as a starting point:
https://docs.corda.net/node-database.html ,
https://github.com/corda/corda/issues/4037 ,
How can the Corda node be extended to work with databases other than H2?
You need to add those properties to node.conf in each of your Corda nodes, after you run "deployNodes".
After you add these properties to the node.conf file, just run the Corda jar; it will start automatically. But before that, you need to create the tables (the migration to other databases is already covered in the Corda documentation).
I have added the following to the .conf file of each node and to one reference.conf file. I have granted the user postgres all the privileges mentioned in the Corda documentation.
https://docs.corda.r3.com/node-database.html
(Previously I used the postgresql-42.2.5.jar file, but that didn't work, so I used an older version, postgresql-42.1.4.jar.
The jar files can be downloaded from https://jdbc.postgresql.org/download.html.)
After the nodes are deployed successfully, add the following:
dataSourceProperties = {
    dataSourceClassName = org.postgresql.ds.PGSimpleDataSource
    dataSource.url = "jdbc:postgresql://127.0.0.1:5432/Testdb1"
    dataSource.user = postgres
    dataSource.password = "admin#123"
}
database = {
    transactionIsolationLevel = "READ_COMMITTED"
}
jarDirs = ["path"]
(path = the directory containing the JDBC driver jar file.) After adding this configuration, run the runnodes.bat file.