Error connecting to environment 1 Org Local Fabric: Error querying channels: 14 UNAVAILABLE: failed to connect to all addresses

I am unable to run my IBM eVote blockchain application in Hyperledger Fabric. I am using IBM eVote in VS Code (v1.39) on Ubuntu 16.04. When I start my local Fabric (1 Org Local Fabric), I get the above error.
Following is my local_fabric_connection.json file:
{
  "name": "local_fabric",
  "version": "1.0.0",
  "client": {
    "organization": "Org1",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "organizations": {
    "Org1": {
      "mspid": "Org1MSP",
      "peers": [
        "peer0.org1.example.com"
      ],
      "certificateAuthorities": [
        "ca.org1.example.com"
      ]
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "url": "grpc://localhost:17051"
    }
  },
  "certificateAuthorities": {
    "ca.org1.example.com": {
      "url": "http://localhost:17054",
      "caName": "ca.org1.example.com"
    }
  }
}
And following is the snapshot:

Based on your second image, it doesn't look like your 1 Org Local Fabric started properly in the first place (you have no gateways, and for some reason your wallets aren't grouped together!).
If you tear down your 1 Org Local Fabric and then start it again, hopefully it'll work.
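If tearing down doesn't fix it, one quick sanity check is that the endpoints in the connection profile are well-formed and point at the ports the local Fabric containers actually expose. A minimal sketch (not part of the extension; the profile values are copied from above):

```python
import json
from urllib.parse import urlparse

# A trimmed copy of the connection profile above.
profile = json.loads("""
{
  "peers": {
    "peer0.org1.example.com": {"url": "grpc://localhost:17051"}
  },
  "certificateAuthorities": {
    "ca.org1.example.com": {"url": "http://localhost:17054"}
  }
}
""")

def check_endpoints(profile):
    """Return (name, host, port) for every peer and CA endpoint, failing on malformed URLs."""
    endpoints = []
    for section in ("peers", "certificateAuthorities"):
        for name, entry in profile.get(section, {}).items():
            parsed = urlparse(entry["url"])
            if parsed.hostname is None or parsed.port is None:
                raise ValueError(f"Malformed URL for {name}: {entry['url']}")
            endpoints.append((name, parsed.hostname, parsed.port))
    return endpoints

for name, host, port in check_endpoints(profile):
    print(f"{name} -> {host}:{port}")
```

You can then compare those host:port pairs against what `docker ps` reports for the Fabric containers; a mismatch produces exactly this kind of UNAVAILABLE error.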

Related

Getting Kafka Connect JMX metrics reporting into Datadog

I am working on a project involving Kafka Connect. We have a Kafka Connect cluster running on Kubernetes with some Snowflake connectors already spun up and working. The part we are having issues with now is getting the JMX metrics from the Kafka Connect cluster to report in Datadog. From my understanding of the docs (https://docs.confluent.io/home/connect/monitoring.html#using-jmx-to-monitor-kconnect), the workers are already emitting metrics by default and we just need to find a way to get them reported to Datadog.
In our K8s ConfigMap we have these values set:
CONNECT_KAFKA_JMX_PORT: "9095"
KAFKA_JMX_PORT: "9095"
JMX_PORT: "9095"
I have also included the line from our launch script where we set KAFKA_JMX_OPTS using that port:
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=<redacted> -Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT}"
I've been looking online and all over Stack Overflow and haven't actually seen an example of people getting JMX metrics reported to Datadog and standing up a dashboard there, so I was wondering if anyone had experience with this.
Firstly, your Datadog agents need to have the Java/JMX integration.
Secondly, use the Datadog JMX integration with Autodiscovery, where kafka-connect must match the container name:
annotations:
  ad.datadoghq.com/kafka-connect.check_names: '["jmx"]'
  ad.datadoghq.com/kafka-connect.init_configs: '[{}]'
  ad.datadoghq.com/kafka-connect.instances: |
    [
      {
        "host": "%%host%%",
        "port": 9095,
        "conf": [
          {
            "include": {
              "domain": "kafka.connect",
              "type": "connector-task-metrics",
              "bean_regex": [
                "kafka.connect:type=connector-task-metrics,connector=.*,task=.*"
              ],
              "attribute": {
                "batch-size-max": {
                  "alias": "jmx.kafka.connect.connector.batch_size_max"
                },
                "status": {
                  "metric_type": "gauge",
                  "alias": "jmx.kafka.connect.connector.status",
                  "values": {
                    "running": 0,
                    "paused": 1,
                    "failed": 2,
                    "destroyed": 3,
                    "unassigned": -1
                  }
                },
                "batch-size-avg": {
                  "alias": "jmx.kafka.connect.connector.batch_size_avg"
                },
                "offset-commit-avg-time-ms": {
                  "alias": "jmx.kafka.connect.connector.offset_commit_avg_time"
                },
                "offset-commit-max-time-ms": {
                  "alias": "jmx.kafka.connect.connector.offset_commit_max_time"
                },
                "offset-commit-failure-percentage": {
                  "alias": "jmx.kafka.connect.connector.offset_commit_failure_percentage"
                }
              }
            }
          },
          {
            "include": {
              "domain": "kafka.connect",
              "type": "source-task-metrics",
              "bean_regex": [
                "kafka.connect:type=source-task-metrics,connector=.*,task=.*"
              ],
              "attribute": {
                "source-record-poll-rate": {
                  "alias": "jmx.kafka.connect.task.source_record_poll_rate"
                },
                "source-record-write-rate": {
                  "alias": "jmx.kafka.connect.task.source_record_write_rate"
                },
                "poll-batch-avg-time-ms": {
                  "alias": "jmx.kafka.connect.task.poll_batch_avg_time"
                },
                "source-record-active-count-avg": {
                  "alias": "jmx.kafka.connect.task.source_record_active_count_avg"
                },
                "source-record-write-total": {
                  "alias": "jmx.kafka.connect.task.source_record_write_total"
                },
                "source-record-poll-total": {
                  "alias": "jmx.kafka.connect.task.source_record_poll_total"
                }
              }
            }
          }
        ]
      }
    ]
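A common pitfall with this setup is a bean_regex entry that doesn't actually match the MBeans the workers register. As a quick offline check (a sketch, independent of Datadog; the sample bean names are hypothetical but follow Kafka Connect's documented naming):

```python
import re

# Patterns copied from the annotation above.
bean_patterns = [
    r"kafka.connect:type=connector-task-metrics,connector=.*,task=.*",
    r"kafka.connect:type=source-task-metrics,connector=.*,task=.*",
]

# Hypothetical MBean names of the shape a Kafka Connect worker would register.
sample_beans = [
    "kafka.connect:type=connector-task-metrics,connector=snowflake-sink,task=0",
    "kafka.connect:type=source-task-metrics,connector=my-source,task=1",
]

for pattern, bean in zip(bean_patterns, sample_beans):
    # re.match anchors at the start of the ObjectName string.
    assert re.match(pattern, bean), f"{pattern} does not match {bean}"
    print(f"OK: {bean}")
```

If a pattern silently matches nothing, the corresponding metrics simply never appear in Datadog, which is hard to distinguish from a connectivity problem.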

Attach disk to a virtual machine with vSphere REST API

I need your help.
I created a virtual machine without disks via the vSphere REST API, and that works really nicely.
Now I want to attach an existing VMDK file to the virtual machine via the vSphere REST API.
I call this URL with a POST request: https://{{vc}}/rest/vcenter/vm/vm-9550/hardware/disk
And this Payload:
{
  "spec": {
    "backing": {
      "type": "VMDK_FILE",
      "vmdk_file": "[DS-MSD-DATA-NFS001] ISOs/Linux/centos-8.vmdk"
    },
    "type": "SCSI",
    "scsi": {
      "bus": 0,
      "unit": 3
    }
  }
}
I got this error:
{
  "type": "com.vmware.vapi.std.errors.invalid_argument",
  "value": {
    "error_type": "INVALID_ARGUMENT",
    "messages": [
      {
        "args": [],
        "default_message": "Invalid configuration for device '0'.",
        "id": "vmsg.InvalidDeviceSpec.summary"
      },
      {
        "args": [],
        "default_message": "Device: VirtualDisk.",
        "id": "vmsg.com.vmware.vim.vpxd.vpx.vmprov.DeviceStr"
      }
    ]
  }
}
I hope you can help me.
Cheers,
Etroska
Found the error: I had no SCSI controller attached to the virtual machine.
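For anyone hitting the same error: the fix is to create the SCSI controller first. As far as I can tell from the vSphere Automation API docs, that's a POST to https://{{vc}}/rest/vcenter/vm/vm-9550/hardware/adapter/scsi with a payload along these lines (the PVSCSI adapter type is just an assumption here; pick whatever your guest OS supports):

```json
{
  "spec": {
    "type": "PVSCSI"
  }
}
```

Once that succeeds, the original disk-attach call with "bus": 0, "unit": 3 has a controller to place the disk on.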

Azure REST API does not return encryption settings for Virtual Machine

I have a 16.04-LTS Ubuntu virtual machine in my Azure account, and I am trying Azure Disk Encryption for this virtual machine using this Azure CLI sample script. After running the encryption script, the Azure portal shows that its OS disk is encrypted: it reads Enabled under the Encryption header.
However, the Azure REST API (api link) for getting information about the virtual machine does not return encryptionSettings under properties.storageProfile.osDisk. I tried both the model view and the model view plus instance view, for api-version 2017-03-30 as well as 2017-12-01. Here is the partial response from the API:
{
  "name": "ubuntu",
  "properties": {
    "osProfile": {},
    "networkProfile": {},
    "storageProfile": {
      "imageReference": {
        "sku": "16.04-LTS",
        "publisher": "Canonical",
        "version": "latest",
        "offer": "UbuntuServer"
      },
      "osDisk": {
        "name": "ubuntu-OsDisk",
        "diskSizeGB": 30,
        "managedDisk": {
          "storageAccountType": "Premium_LRS",
          "id": "..."
        },
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "osType": "Linux"
      },
      "dataDisks": []
    },
    "diagnosticsProfile": {},
    "vmId": "",
    "hardwareProfile": {
      "vmSize": "Standard_B1s"
    },
    "provisioningState": "Succeeded"
  },
  "location": "eastus",
  "type": "Microsoft.Compute/virtualMachines",
  "id": ""
}
But for my other encrypted windows virtual machine, I get the correct response which contains encryptionSettings in properties.storageProfile.osDisk:
{
  "name": "win1",
  "properties": {
    "osProfile": {},
    "networkProfile": {},
    "storageProfile": {
      "imageReference": {
        "sku": "2016-Datacenter-smalldisk",
        "publisher": "MicrosoftWindowsServer",
        "version": "latest",
        "offer": "WindowsServer"
      },
      "osDisk": {
        "name": "win1_OsDisk_1",
        "diskSizeGB": 31,
        "managedDisk": {
          "storageAccountType": "Premium_LRS",
          "id": "..."
        },
        "encryptionSettings": {
          "diskEncryptionKey": {
            "secretUrl": "...",
            "sourceVault": {
              "id": "..."
            }
          },
          "keyEncryptionKey": {
            "keyUrl": "...",
            "sourceVault": {
              "id": "..."
            }
          },
          "enabled": true
        },
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "osType": "Windows"
      },
      "dataDisks": []
    },
    "diagnosticsProfile": {},
    "vmId": "...",
    "hardwareProfile": {
      "vmSize": "Standard_B1s"
    },
    "provisioningState": "Succeeded"
  },
  "location": "eastus",
  "type": "Microsoft.Compute/virtualMachines",
  "id": "..."
}
Why is the Virtual Machine Get API not returning the encryptionSettings for some VMs? Any help would be greatly appreciated.
I created a VM using the following command:
az vm create \
--resource-group shuivm \
--name shuivm \
--image Canonical:UbuntuServer:16.04-LTS:latest \
--admin-username azureuser \
--generate-ssh-keys
When I use the following API, I can get the encryption settings:
https://management.azure.com/subscriptions/**********/resourceGroups/shuivm/providers/Microsoft.Compute/virtualMachines/shuivm?api-version=2017-03-30
Note: once the OS is encrypted successfully, I can use the API to get the encryption settings.
This is because there are two types of at-rest disk encryption for Azure VMs, and they are not reported in the same part of the Azure Management API:
Server-Side Encryption: you can see this in the encryptionSettings section of the VM/compute API when you get the VM details. It will show whether you are encrypting with a customer-managed key or a platform-managed key.
ADE: Azure Disk Encryption is actually a VM extension, so you can find it in the VM Extensions API instead.
See: https://learn.microsoft.com/en-us/rest/api/compute/virtualmachineextensions/list
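So for the Ubuntu VM above, the ADE state can be read from the extensions list rather than from storageProfile.osDisk. A sketch of the request (resource names taken from the question; check the api-version against the current docs):

```
GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/shuivm/providers/Microsoft.Compute/virtualMachines/shuivm/extensions?api-version=2017-12-01
```

The ADE extension shows up in that list (e.g. as AzureDiskEncryptionForLinux on Linux VMs), and `az vm encryption show` surfaces the same information if you prefer the CLI.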

Specify ECR image instead of S3 file in Cloud Formation Elastic Beanstalk template

I'd like to reference an EC2 Container Registry image in the Elastic Beanstalk section of my CloudFormation template. The sample file references an S3 bucket for the source bundle:
"applicationVersion": {
  "Type": "AWS::ElasticBeanstalk::ApplicationVersion",
  "Properties": {
    "ApplicationName": { "Ref": "application" },
    "SourceBundle": {
      "S3Bucket": { "Fn::Join": [ "-", [ "elasticbeanstalk-samples", { "Ref": "AWS::Region" } ] ] },
      "S3Key": "php-sample.zip"
    }
  }
}
Is there any way to reference an EC2 Container Registry image instead? Something like what is available in the EC2 Container Service TaskDefinition?
Upload a Dockerrun file to S3 in order to do this. Here's an example Dockerrun:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-bucket",
    "Key": "mydockercfg"
  },
  "Image": {
    "Name": "quay.io/johndoe/private-image",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080:80"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/mydb",
      "ContainerDirectory": "/etc/mysql"
    }
  ],
  "Logging": "/var/log/nginx"
}
Use this file as the S3 key. More info is available here.
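For ECR specifically (rather than a private registry like quay.io), you can point Image.Name at the ECR repository URI and drop the Authentication block entirely, as long as the environment's instance profile is allowed to pull from ECR (e.g. via the AmazonEC2ContainerRegistryReadOnly managed policy). A sketch with made-up account, region, and repository names:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}
```

Upload that Dockerrun.aws.json to S3 and reference it from S3Bucket/S3Key exactly as in the sample template.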

Creating an Azure VM using the Azure Resource Manager API

I am using the following Azure REST API to create a virtual machine in Azure Resource Manager mode:
PUT
https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Compute/virtualMachines/{vm-name}?validating={true|false}&api-version={api-version}
The virtual machine gets created but remains in the Creating state, as I can see in its status on the new Azure portal.
I am able to RDP into the machine, but after a login it fails.
What could be the reason? Is there anything I am missing?
Note: I am creating the virtual machine from an image.
Request JSON:
{
  "properties": {
    "hardwareProfile": {
      "vmSize": "Standard_A0"
    },
    "storageProfile": {
      "osDisk": {
        "osType": "Windows",
        "name": "goldentemplate-osDisk",
        "createOption": "FromImage",
        "image": {
          "uri": "https://storagename.blob.core.windows.net/system/Microsoft.Compute/Images/mytemplates/goldentemplate-osDisk.vhd"
        },
        "vhd": {
          "uri": "https://storagename.blob.core.windows.net/vmcontainersagar/sagargoden.vhd"
        },
        "caching": "None"
      },
      "dataDisks": []
    },
    "osProfile": {
      "computerName": "sagarHostVM",
      "adminUsername": "itadmin",
      "adminPassword": "Micr0s0ft12!#"
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "properties": {},
          "id": "/subscriptions/subscritpionid/resourceGroups/harigroup/providers/Microsoft.Network/networkInterfaces/sagarhostnic"
        }
      ]
    }
  },
  "name": "sagarHostVM",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "WestUs",
  "tags": {}
}