presto: Discovery server cannot connect (coordinator)

Recently I built Presto in cluster mode, with 1 coordinator and 1 worker, and it worked.
Then I repackaged "presto-main-0.148.jar" without any change and replaced it in the production environment, and it no longer works: I always get the response "No worker nodes available".
I searched server.log and see the messages below:
ERROR Discovery-0 io.airlift.discovery.client.CachingServiceSelector Cannot connect to discovery server for refresh (collector/general): Lookup of collector failed for http://10.3.2.33:18080/v1/service/collector/general
ERROR Discovery-0 io.airlift.discovery.client.CachingServiceSelector Cannot connect to discovery server for refresh (presto/general): Lookup of presto failed for http://10.3.2.33:18080/v1/service/presto/general
INFO Discovery-1 io.airlift.discovery.client.CachingServiceSelector Discovery server connect succeeded for refresh (collector/general)
INFO Discovery-2 io.airlift.discovery.client.CachingServiceSelector Discovery server connect succeeded for refresh (presto/general)
So I guessed the discovery server was not started, but when I run curl "http://10.3.2.33:18080/v1/service/collector/general" I get the response below, and the coordinator status is 'ACTIVE':
{
  "environment": "presto_**_flt",
  "services": [
    {
      "id": "954e886d-7506-4f00-b954-eeab49209835",
      "nodeId": "4c0f2596-7e6e-11e6-ae22-56b6b6499611",
      "type": "presto",
      "pool": "general",
      "location": "/4c0f2596-7e6e-11e6-ae22-56b6b6499611",
      "properties": {
        "node_version": "a0e36ae",
        "coordinator": "false",
        "http": "http://10.3.2.24:18080",
        "http-external": "http://10.3.2.24:18080",
        "datasources": "hive,system"
      }
    },
    {
      "id": "6790b522-cd17-48ef-b077-e4e8fa97e310",
      "nodeId": "4c0f2366-7e6e-11e6-ae22-56b6b6499611",
      "type": "presto",
      "pool": "general",
      "location": "/4c0f2366-7e6e-11e6-ae22-56b6b6499611",
      "properties": {
        "node_version": "c34bef3-dirty",
        "coordinator": "true",
        "http": "http://10.3.2.33:18080",
        "http-external": "http://10.3.2.33:18080",
        "datasources": ""
      }
    }
  ]
}

I think this is because you have two different node_version values in these two services.
If you are repackaging presto-main or any other component, make sure you use the same binaries on all the nodes.
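For example, you can verify which version each node reports by querying system.runtime.nodes through the Presto CLI (a minimal sketch; the coordinator address is taken from the question, and the CLI binary name/path may differ in your setup):
presto --server http://10.3.2.33:18080 --execute "SELECT * FROM system.runtime.nodes"
Every node in the cluster should report the same node_version; any mismatch points at a node still running an old or differently built jar.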

Related

Azure DB for PostgreSQL - changes to log_line_prefix parameter not implemented

I have a General Purpose Single Server instance of Azure DB for PostgreSQL where I have installed the pgAudit plugin.
I am trying to add more data to the pgAudit session auditing entries by following the instructions on Microsoft's page and PostgreSQL's page, and I tried setting log_line_prefix to the following configurations:
t=%t c=%c a=%a u=%u d=%d r=%r% h=h% e=e c=%c
%t,%c,%a,%u,%d,%r,%h,%e,%c
%t%c%a%u%d%r%h%e%c
None of these has any effect on the events collected. Here's most of what an INSERT looks like:
{
  "LogicalServerName": "postgresql4moi",
  "SubscriptionId": "****",
  "ResourceGroup": "OLC_Research",
  "time": "2020-05-05T12:10:59Z",
  "resourceId": "***",
  "category": "PostgreSQLLogs",
  "operationName": "LogEvent",
  "properties": {
    "prefix": "t=2020-05-05 12:10:59 UTC c=5eb157c4.5c a=DBeaver 7.0.1 - SQLEditor <testingScript.sql> u=system d=postgres r=****.234(4344)h=he=e c=5eb157c4.5c",
    "message": "AUDIT: SESSION,6,1,WRITE,INSERT,,,\"INSERT INTO public.koko_table VALUES ('kokoMoko','kokoMoko')\",<none>",
    "detail": "",
    "errorLevel": "LOG",
    "domain": "postgres-11",
    "schemaName": "",
    "tableName": "",
    "columnName": "",
    "datatypeName": ""
  }
}
Is there something else I forgot to configure?
I even restarted the database after each attempt to set the parameter.
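(For reference, the equivalent way to apply each value and restart from the Azure CLI would be roughly the following; the resource group name here is just a placeholder:)
az postgres server configuration set --resource-group myResourceGroup --server-name postgresql4moi --name log_line_prefix --value "%t,%c,%a,%u,%d,%r,%h,%e,%c"
az postgres server restart --resource-group myResourceGroup --name postgresql4moi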
Thanks in advance.

CloudFormation OpenVPN

I want to configure OpenVPN using CloudFormation. I thought I could get the AMI ID from the Marketplace and launch it, because I want to launch an instance that supports 10 connections, but unfortunately I am not able to get the AMI ID from the Marketplace.
How do I get the AMI ID of an OpenVPN Access Server that supports 10 connections?
You can launch the OpenVPN Access Server using CloudFormation with the following steps:
Use the following command to list the OpenVPN AMI IDs:
aws --region=ap-southeast-2 ec2 describe-images --owner=aws-marketplace --filters 'Name=name,Values=OpenVPN Access Server 2.7.5*'
The above command will give the following output
{
  "Images": [
    {
      "VirtualizationType": "hvm",
      "Hypervisor": "xen",
      "RootDeviceType": "ebs",
      "SriovNetSupport": "simple",
      "OwnerId": "123",
      "ImageId": "ami-01f26c6ea254596c5",
      "Name": "OpenVPN Access Server 2.7.5-bbff26cd-b407-44a2-a7ef-70b8971391f1-ami-0c56f53c16ad84dcd.4",
      "BlockDeviceMappings": [
        {
          "DeviceName": "/dev/sda1",
          "Ebs": {
            "SnapshotId": "snap-0624e972dc64638ed",
            "VolumeSize": 8,
            "Encrypted": false,
            "VolumeType": "standard",
            "DeleteOnTermination": true
          }
        }
      ],
      "EnaSupport": true,
      "ImageLocation": "aws-marketplace/OpenVPN Access Server 2.7.5-bbff26cd-b407-44a2-a7ef-70b8971391f1-ami-0c56f53c16ad84dcd.4",
      "ImageOwnerAlias": "aws-marketplace",
      "ProductCodes": [
        {
          "ProductCodeId": "b4oaowtu943z36a9jxepql6gh",
          "ProductCodeType": "marketplace"
        }
      ],
      "ImageType": "machine",
      "Public": true,
      "CreationDate": "2019-09-30T14:17:14.000Z",
      "Description": "http://www.openvpn.net/",
      "State": "available",
      "RootDeviceName": "/dev/sda1",
      "Architecture": "x86_64"
    },
    ...
  ]
}
To get more details about the image and the OpenVPN Server, you will have to search the Marketplace using the AMI ID.
To use the image in CloudFormation you have to subscribe to the product on the Marketplace.
Once you have subscribed, you can launch OpenVPN like any other EC2 instance by referencing the AMI ID, for example:
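A minimal sketch of the relevant resource (not a complete template; the instance type and key pair name are placeholders, and the AMI ID is the one returned above):
{
  "Resources": {
    "OpenVpnServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-01f26c6ea254596c5",
        "InstanceType": "t2.small",
        "KeyName": "my-key-pair"
      }
    }
  }
}
In practice you would also attach a security group that opens the OpenVPN Access Server ports (e.g. 943, 443 and 1194) to the instance.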

Hyperledger Explorer Error 12 UNIMPLEMENTED: service protos.Endorser

I am trying to run the Hyperledger Explorer for my blockchain network. I have followed the instructions almost word for word from the Hyperledger Explorer documentation.
But any time I do the final call, ./start.sh, I get a litany of errors:
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: 12 UNIMPLEMENTED: unknown service protos.Endorser
at new createStatusError (/home/ubuntu/bludev/blockchain-explorer/node_modules/grpc/src/client.js:64:15)
at /home/ubuntu/bludev/blockchain-explorer/node_modules/grpc/src/client.js:583:15
error: [Client.js]: Failed Installed Chaincodes Query. Error: Error: 12 UNIMPLEMENTED: unknown service protos.Endorser
at new createStatusError (/home/ubuntu/bludev/blockchain-explorer/node_modules/grpc/src/client.js:64:15)
at /home/ubuntu/bludev/blockchain-explorer/node_modules/grpc/src/client.js:583:15
...
And so on. For more info: I am using Node.js 6.9 and PostgreSQL 9.5. This is what my config.json file looks like:
{
  "network-config": {
    "org1": {
      "name": "peerOrg1",
      "mspid": "Org1MSP",
      "peer1": {
        "requests": "grpc://127.0.0.1:7051",
        "events": "grpc://127.0.0.1:7053",
        "server-hostname": "peer0.org1.example.com",
        "tls_cacerts": "/home/ubuntu/bludev/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
      },
      "admin": {
        "key": "/home/ubuntu/bludev/hlcomposer/fabric-dev-servers/fabric-scripts/hlfv1/composer/crypto-config/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp/keystore",
        "cert": "/home/ubuntu/bludev/hlcomposer/fabric-dev-servers/fabric-scripts/hlfv1/composer/crypto-config/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp/signcerts"
      }
    }
  },
  "host": "localhost",
  "port": "3000",
  "channel": "mychannel",
  "keyValueStore": "/tmp/fabric-client-kvs",
  "eventWaitTime": "30000",
  "pg": {
    "host": "12.109.99.233",
    "port": "3000",
    "database": "fabricexplorer",
    "username": "postgres",
    "passwd": "password1"
  }
}
The problem is that your Hyperledger network does not have any endorser in the network.
Try the first-network sample from the official fabric-samples repository, rebuild the explorer, and then try again.
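A rough sketch of bringing that sample up, assuming fabric-samples is already cloned and its prerequisites are installed (the channel name mychannel matches the one in your config.json):
cd fabric-samples/first-network
./byfn.sh generate
./byfn.sh up -c mychannel
Once the sample network is running with real endorsing peers, point the tls_cacerts/admin paths in config.json at the crypto material generated by this network and run ./start.sh again.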

Azure Automation Registration Endpoint is corrupted when used to pull DSC configuration

For some reason, I keep getting these weird issues...
In this case, I have a key and endpoint URL for the Automation Account stored as secrets in a Key Vault (I don't know of a way to extract them natively from the Automation Account using ARM).
I can extract these values perfectly, and they are published to the template that runs a PowerShell extension to pull a DSC configuration.
For example, as seen as an input when deploying the template:
"RegistrationUrl":"https://ase-agentservice-prod-1.azure-automation.net/accounts/e0799801-a8da-8934-b0f3-9a43191dd7e6"
However, I receive the following error (note the URL in the error has three forward slashes):
"code": "VMExtensionProvisioningError",
"message": "VM has reported a failure when processing extension 'dscLcm'.
Error message: "DSC Configuration 'ConfigureLCMforAAPull' completed with error(s). Following are the first few: The attempt to 'get an action' for AgentId 11A5A267-6D00-11E7-B07F-000D3AE0FB1B from server URL https://ase-agentservice-prod-1.azure-automation.net///accounts/e0799801-a8da-8934-b0f3-9a43191dd7e6/Nodes(AgentId='11A5A267-6D00-11E7-B07F-000D3AE0FB1B')/GetDscAction failed with server error 'ResourceNotFound(404)'.
For further details see the server error message below or the DSC debug event log with ID 4339.
ServerErrorMessage:- 'No NodeConfiguration was found for the agent.'\"."
The endpoint URL is passed as a secure string. I tried passing it as a normal string - same problem.
The key and endpoint are fed into the template as parameters:
"dscKeySecret": {
"type": "securestring",
"metadata": {
"description": "Key for PowerShell DSC Configuration."
}
},
"dscUrlSecret": {
"type": "securestring",
"metadata": {
"description": "Url for PowerShell DSC Configuration."
}
},
These values are used to create a parameter to be passed to the next template that runs the VM Extension.
"extn-settings": {
"value": {
"configuration": {
"url": "[concat(variables('urls').dscScripts, '/', 'lcm-aa-pull', '/', 'lcm-aa-pull', '.zip')]",
"script": "[concat('lcm-aa-pull', '.ps1')]",
"function": "ConfigureLCMforAAPull"
},
"configurationArguments": {
"registrationKey": {
"username": "dsckeySecret",
"password": "[parameters('dscKeySecret')]"
},
"registrationUrl": "[parameters('dscUrlSecret')]",
"configurationMode": "ApplyAndMonitor",
"configurationModeFrequencyMins": 15,
"domain": "[variables('names').domain]",
"name": "dscLcm",
"nodeConfigurationName": "[variables('names').config.ad]",
"rebootNodeIfNeeded": true,
"refreshFrequencyMins": 30
},
"protectedSettings": null,
}
}
The next template receives the parameters, which are used in the properties section of the VM's extension resource:
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.22",
"autoUpgradeMinorVersion": true,
"settings": {
"configuration": "[parameters('extn-settings').configuration]",
"configurationArguments": "[parameters('extn-settings').configurationArguments]"
},
"protectedSettings": "[parameters('extn-settings').protectedSettings]"
}
So why is the URL being corrupted, with the first '/' being changed to '///'?
I don't know why the endpoint URL has three '/', but that wasn't the issue... I wish I had found the real issue before I posted this question.
I found that the node configuration name was wrong because of a spelling mistake (hangs head in shame).
Thanks anyway!
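For anyone else hitting 'No NodeConfiguration was found for the agent', a quick sanity check is to list the node configurations that actually exist in the Automation account and compare them against the nodeConfigurationName passed to the extension, for example with the Az PowerShell module (the resource group and account names below are placeholders):
Get-AzAutomationDscNodeConfiguration -ResourceGroupName "my-rg" -AutomationAccountName "my-automation-account" | Select-Object Name
The name must match exactly, in the form 'ConfigurationName.NodeName'.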

How to create an entity?

I created my instance on the cloud, but when I try to do a POST the data are not sent to the VM. Is something wrong with the data I use?
I'm using the REST Client on Firefox.
This is the body of the request (JSON):
{
  "contextElements": [
    {
      "type": "Room",
      "isPattern": "false",
      "id": "Room1",
      "attributes": [
        {
          "name": "temperature",
          "type": "float",
          "value": "23"
        },
        {
          "name": "pressure",
          "type": "integer",
          "value": "720"
        }
      ]
    }
  ],
  "updateAction": "APPEND"
}
The URL is http://10.0.22x.6x:1026/NGSI10/updateContext and the headers are:
Content-Type: application/json
Accept: application/json
Note that you are sending your REST request to a private IP (10.0.22x.6x). However, I guess that you run your Firefox REST Client on a PC or laptop without direct connectivity to that IP.
The solution would be to allocate a public IP to the VM, then access that public IP from your external REST client. Note that you need port 1026 open in the security group associated with that VM (otherwise the cloud will block any attempt to connect to it from an external host).
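Once the VM has a public IP and port 1026 is open, you can test the same request from outside, e.g. with curl (the public IP below is a placeholder and body.json contains the JSON payload from the question):
curl -X POST "http://<public-ip>:1026/NGSI10/updateContext" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d @body.json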