How can a Spring Boot application determine if it is running on Cloud Foundry? - mongodb

I'm writing a microservice with Spring Boot; the database is MongoDB. The service works perfectly in my local environment, but after I deployed it to Cloud Foundry it doesn't work: the connection to MongoDB times out.
I think the root cause is that the application doesn't know it is running in the cloud, because it still connects to 127.0.0.1:27017 rather than the redirected port.
How can it know it is running in the cloud? Thank you!
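The only approach I can think of so far is to check for the VCAP_APPLICATION environment variable, which Cloud Foundry injects into every application container. A minimal sketch (plain Java, nothing Spring-specific):

// Sketch: Cloud Foundry sets VCAP_APPLICATION (and VCAP_SERVICES for bound
// services) in each app container's environment; locally these are normally unset.
public class CloudFoundryDetector {

    static boolean runningOnCloudFoundry() {
        String vcap = System.getenv("VCAP_APPLICATION");
        return vcap != null && !vcap.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println("On Cloud Foundry: " + runningOnCloudFoundry());
    }
}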
EDIT:
There is a MongoDB instance bound to the service, and when I checked the environment information I got the following:
{
  "VCAP_SERVICES": {
    "mongodb": [
      {
        "credentials": {
          "hostname": "10.11.241.1",
          "ports": {
            "27017/tcp": "43417",
            "28017/tcp": "43135"
          },
          "port": "43417",
          "username": "xxxxxxxxxx",
          "password": "xxxxxxxxxx",
          "dbname": "gwkp7glhw9tq9cwp",
          "uri": "xxxxxxxxxx"
        },
        "syslog_drain_url": null,
        "volume_mounts": [],
        "label": "mongodb",
        "provider": null,
        "plan": "v3.0-container",
        "name": "mongodb-business-configuration",
        "tags": [
          "mongodb",
          "document"
        ]
      }
    ]
  }
}
{
  "VCAP_APPLICATION": {
    "cf_api": "xxxxxxxxxx",
    "limits": {
      "fds": 16384,
      "mem": 1024,
      "disk": 1024
    },
    "application_name": "mock-service",
    "application_uris": [
      "xxxxxxxxxx"
    ],
    "name": "mock-service",
    "space_name": "xxxxxxxxxx",
    "space_id": "xxxxxxxxxx",
    "uris": [
      "xxxxxxxxxx"
    ],
    "users": null,
    "application_id": "xxxxxxxxxx",
    "version": "c7569d23-f3ee-49d0-9875-8e595ee76522",
    "application_version": "c7569d23-f3ee-49d0-9875-8e595ee76522"
  }
}
From my understanding, my Spring Boot service should try to connect to port 43417 rather than 27017, right? Thank you!

Finally I found the reason: I didn't specify the profile. After adding the following to my manifest.yml, it works:
env:
  SPRING_PROFILES_ACTIVE: cloud
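With the cloud profile active, a profile-specific configuration can then pick up the bound instance's credentials instead of the default 127.0.0.1:27017. Below is a minimal sketch, not the actual code of my service: it assumes Spring Boot's standard flattening of VCAP_SERVICES into vcap.services.* properties and uses the service name mongodb-business-configuration from the environment dump above.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Only active when SPRING_PROFILES_ACTIVE=cloud; local runs keep the
// default localhost connection.
@Configuration
@Profile("cloud")
public class CloudMongoConfig {

    // Spring Boot flattens VCAP_SERVICES into vcap.services.<name>.credentials.*,
    // so the bound instance's connection URI can be injected directly.
    @Bean
    public MongoClient mongoClient(
            @Value("${vcap.services.mongodb-business-configuration.credentials.uri}")
            String uri) {
        return MongoClients.create(uri);
    }
}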

Related

Connect strapi with mongoose to a MongoDb (mLab)

I tried to connect Strapi to mLab with this database.js config, but it doesn't work. I get the error:
ConnectorError: connector "strapi-hook-mongoose" not found: Cannot find module 'strapi-connector-strapi-hook-mongoose'
Here is my database.js config file:
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "strapi-hook-mongoose",
      "settings": {
        "database": "strapi-test",
        "host": "ds131914.mlab.com",
        "srv": false,
        "port": "31914",
        "username": "root",
        "password": "root010101"
      },
      "options": {
        "authenticationDatabase": "strapi-test"
      }
    }
  }
}
What should I do?
After some searching, it appears that this database.js config was from an old tutorial (this one). To solve this problem, you first need to run npm i -S strapi-connector-mongoose to install the right connector.
Now you need to change your database.js config for the desired environment. In my case it was production, so edit config/environments/production/database.js like this:
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "mongoose",
      "settings": {
        "client": "mongo",
        "host": "ds131914.mlab.com",
        "port": "31914",
        "srv": false,
        "database": "strapi-test",
        "username": "root",
        "password": "root010101"
      },
      "options": {
        "authenticationDatabase": "strapi-test",
        "ssl": false
      }
    }
  }
}
With this, it should work!

AWS ECS Task Definition: Unknown parameter in volumes[0]: "dockerVolumeConfiguration", must be one of: name, host

I am trying to run the Wazuh/Wazuh Docker container on ECS. I was able to register the task definition and launch the container using Terraform. However, I am facing an issue with "Volume" (data volume) while registering the task definition using an AWS CLI command.
Command: aws ecs --region eu-west-1 register-task-definition --family hids --cli-input-json file://task-definition.json
Error:
ParamValidationError: Parameter validation failed:
Unknown parameter in volumes[0]: "dockerVolumeConfiguration", must be one of: name, host
2019-08-29 07:31:59,195 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
{
  "containerDefinitions": [
    {
      "portMappings": [
        {
          "hostPort": 514,
          "containerPort": 514,
          "protocol": "udp"
        },
        {
          "hostPort": 1514,
          "containerPort": 1514,
          "protocol": "udp"
        },
        {
          "hostPort": 1515,
          "containerPort": 1515,
          "protocol": "tcp"
        },
        {
          "hostPort": 1516,
          "containerPort": 1516,
          "protocol": "tcp"
        },
        {
          "hostPort": 55000,
          "containerPort": 55000,
          "protocol": "tcp"
        }
      ],
      "image": "wazuh/wazuh",
      "essential": true,
      "name": "chids",
      "cpu": 1600,
      "memory": 1600,
      "mountPoints": [
        {
          "containerPath": "/var/ossec/data",
          "sourceVolume": "ossec-data"
        },
        {
          "containerPath": "/etc/filebeat",
          "sourceVolume": "filebeat_etc"
        },
        {
          "containerPath": "/var/lib/filebeat",
          "sourceVolume": "filebeat_lib"
        },
        {
          "containerPath": "/etc/postfix",
          "sourceVolume": "postfix"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "ossec-data",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "driver": "local",
        "autoprovision": true
      }
    },
    {
      "name": "filebeat_etc",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "driver": "local",
        "autoprovision": true
      }
    },
    {
      "name": "filebeat_lib",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "driver": "local",
        "autoprovision": true
      }
    },
    {
      "name": "postfix",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "driver": "local",
        "autoprovision": true
      }
    }
  ]
}
I tried adding the "host" parameter (although it supports bind mounts only), but got the same error.
"volumes": [
{
"name": "ossec-data",
"host": {
"sourcePath": "/var/ossec/data"
},
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
}
]
ECS should register the task definition with the four data volumes and their associated mount points.
Got the issue.
I removed the "dockerVolumeConfiguration" parameter from the "Volume" configuration and it worked.
"volumes": [
{
"name": "ossec-data",
"host": {
"sourcePath": "/ecs/ossec-data"
}
},
{
"name": "filebeat_etc",
"host": {
"sourcePath": "/ecs/filebeat_etc"
}
},
{
"name": "filebeat_lib",
"host": {
"sourcePath": "/ecs/filebeat_lib"
}
},
{
"name": "postfix",
"host": {
"sourcePath": "/ecs/postfix"
}
}
]
Can you check your version of the AWS CLI?
aws --version
According to all the documentation, your first task definition should work fine, and I tested it locally without any issues.
It might be that you are using an older AWS CLI version where the syntax or the parameters were different at the time.
Could you try updating your AWS CLI to the latest version and trying again?
--
Some additional info I found:
Checking on the aws ecs cli command, docker volume configuration was added to the CLI in v1.80.
The main aws-cli releases are updated periodically to pick up command changes, but they don't provide much info on which version each command changed in:
https://github.com/aws/aws-cli/blob/develop/CHANGELOG.rst
If you update your aws-cli version, things should work.

Azure REST API does not return encryption settings for Virtual Machine

I have a 16.04-LTS Ubuntu virtual machine in my Azure account, and I am trying Azure Disk Encryption for this virtual machine using this Azure CLI sample script. After running the encryption script, the Azure portal shows that its OS disk is encrypted: there is "Enabled" under the Encryption header.
However, the Azure REST API (api link) for getting information about the virtual machine does not return encryptionSettings under properties.storageProfile.osDisk. I tried both "Model View" and "Model View and Instance View" with api-version 2017-03-30 as well as 2017-12-01. Here is the partial response from the API:
{
  "name": "ubuntu",
  "properties": {
    "osProfile": {},
    "networkProfile": {},
    "storageProfile": {
      "imageReference": {
        "sku": "16.04-LTS",
        "publisher": "Canonical",
        "version": "latest",
        "offer": "UbuntuServer"
      },
      "osDisk": {
        "name": "ubuntu-OsDisk",
        "diskSizeGB": 30,
        "managedDisk": {
          "storageAccountType": "Premium_LRS",
          "id": "..."
        },
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "osType": "Linux"
      },
      "dataDisks": []
    },
    "diagnosticsProfile": {},
    "vmId": "",
    "hardwareProfile": {
      "vmSize": "Standard_B1s"
    },
    "provisioningState": "Succeeded"
  },
  "location": "eastus",
  "type": "Microsoft.Compute/virtualMachines",
  "id": ""
}
But for my other, encrypted Windows virtual machine, I get the correct response, which contains encryptionSettings in properties.storageProfile.osDisk:
{
  "name": "win1",
  "properties": {
    "osProfile": {},
    "networkProfile": {},
    "storageProfile": {
      "imageReference": {
        "sku": "2016-Datacenter-smalldisk",
        "publisher": "MicrosoftWindowsServer",
        "version": "latest",
        "offer": "WindowsServer"
      },
      "osDisk": {
        "name": "win1_OsDisk_1",
        "diskSizeGB": 31,
        "managedDisk": {
          "storageAccountType": "Premium_LRS",
          "id": "..."
        },
        "encryptionSettings": {
          "diskEncryptionKey": {
            "secretUrl": "...",
            "sourceVault": {
              "id": "..."
            }
          },
          "keyEncryptionKey": {
            "keyUrl": "...",
            "sourceVault": {
              "id": "..."
            }
          },
          "enabled": true
        },
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "osType": "Windows"
      },
      "dataDisks": []
    },
    "diagnosticsProfile": {},
    "vmId": "...",
    "hardwareProfile": {
      "vmSize": "Standard_B1s"
    },
    "provisioningState": "Succeeded"
  },
  "location": "eastus",
  "type": "Microsoft.Compute/virtualMachines",
  "id": "..."
}
Why is the Virtual Machine Get API not returning the encryptionSettings for some VMs? Any help would be greatly appreciated.
I created a VM using the following command:
az vm create \
  --resource-group shuivm \
  --name shuivm \
  --image Canonical:UbuntuServer:16.04-LTS:latest \
  --admin-username azureuser \
  --generate-ssh-keys
When I used the following API, I could get the encryption settings:
https://management.azure.com/subscriptions/**********/resourceGroups/shuivm/providers/Microsoft.Compute/virtualMachines/shuivm?api-version=2017-03-30
Note: only once the OS is encrypted successfully could I use the API to get the encryption settings.
This is because there are two types of at-rest disk encryption for Azure VMs, and they are not reported in the same part of the Azure Management API:
Server-Side Encryption: visible in the encryptionSettings section of the VM/compute API when you get a VM's details. It shows whether you are encrypting with a customer-managed key or a platform-managed key.
ADE: Azure Disk Encryption is actually a VM extension, so you can find it in the VM Extension API instead.
see: https://learn.microsoft.com/en-us/rest/api/compute/virtualmachineextensions/list
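For illustration, here is a rough sketch of that call (URL pattern per the linked reference; the subscription ID, resource group, and bearer token are placeholders, and java.net.http requires Java 11+). For an ADE-encrypted Linux VM, the response should list the AzureDiskEncryptionForLinux extension with its provisioning state.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: list a VM's extensions via the Azure Management REST API.
public class ListVmExtensions {

    public static void main(String[] args) throws Exception {
        String subscriptionId = "SUBSCRIPTION_ID";  // placeholder
        String resourceGroup  = "RESOURCE_GROUP";   // placeholder
        String vmName         = "ubuntu";           // the VM from the question
        String accessToken    = "AAD_ACCESS_TOKEN"; // placeholder Azure AD token

        String url = "https://management.azure.com/subscriptions/" + subscriptionId
                + "/resourceGroups/" + resourceGroup
                + "/providers/Microsoft.Compute/virtualMachines/" + vmName
                + "/extensions?api-version=2017-12-01";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + "\n" + response.body());
    }
}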

Replicating a remote database from Bluemix

Database URL:
https://$USERNAME:$PASSWORD@$REMOTE_USERNAME.cloudant.com/$DATABASE_NAME
What are the values of $USERNAME, $PASSWORD, and $REMOTE_USERNAME?
The current Cloudant account:
VCAP_SERVICES
{
  "cloudantNoSQLDB": [
    {
      "credentials": {
        "username": "c39cexxx-bluemix",
        "password": "xxxxxxx",
        "host": "c39cexxx-bluemix.cloudant.com",
        "port": 443,
        "url": "https://c39cexxx-bluemix:xxxxxxx@c39cexxx-bluemix.cloudant.com"
      }
    }
  ]
}
The other Cloudant account (the one with the remote database):
VCAP_SERVICES
{
  "cloudantNoSQLDB": [
    {
      "credentials": {
        "username": "f39c4xxx-bluemix",
        "password": "xxxxxxx",
        "host": "f39c4xxx-bluemix.cloudant.com",
        "port": 443,
        "url": "https://f39c4xxx-bluemix:xxxxxxx@f39c4xxx-bluemix.cloudant.com"
      }
    }
  ]
}
Please give an example of replicating a remote database from Bluemix.
If you want to replicate from a remote database into a local one, your Database URL will be:
https://$remote_username:$remote_password@$remote_username.cloudant.com/$remote_database
E.g.
https://f39c45g0-bluemix:0ebdc6c7@f39c45g0-bluemix.cloudant.com/the_remote_database
You can find more information here: https://developer.ibm.com/clouddataservices/cloudant-replication/
NOTE: I'm assuming you didn't post your actual credentials; if you did, you should at least change your passwords.
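As a concrete illustration, replication can be triggered by POSTing a replication document to the local account's _replicate endpoint (standard CouchDB/Cloudant API). All credentials and database names below are placeholders shaped like the VCAP_SERVICES entries above; java.net.http requires Java 11+.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Sketch: ask the local Cloudant account to pull a remote database
// into a local one.
public class ReplicateFromRemote {

    public static void main(String[] args) throws Exception {
        String localAccount  = "c39cexxx-bluemix"; // local account (placeholder)
        String localPassword = "LOCAL_PASSWORD";   // placeholder

        // The source URL embeds the remote account's credentials, matching
        // the https://$remote_username:$remote_password@... shape above;
        // create_target makes the local database if it doesn't exist yet.
        String body = "{"
                + "\"source\": \"https://f39c4xxx-bluemix:REMOTE_PASSWORD@f39c4xxx-bluemix.cloudant.com/the_remote_database\","
                + "\"target\": \"my_local_copy\","
                + "\"create_target\": true"
                + "}";

        String auth = Base64.getEncoder()
                .encodeToString((localAccount + ":" + localPassword).getBytes());

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://" + localAccount + ".cloudant.com/_replicate"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}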

"Error getting chaincode package bytes" when deploying chaincode on hyperledger via REST

I'm trying to deploy chaincode on Hyperledger (Bluemix service) via a POST/REST call to
/chaincode
QuerySpec
{ "jsonrpc": "2.0", "method": "deploy", "params": { "type": 1,
"chaincodeID": { "path":
"https://github.com/romeokienzler/learn-chaincode/tree/master/finished"
}, "ctorMsg": { "function": "init", "args": [ "hi there" ] },
"secureContext": "user_type1_0" }, "id": 1 }
I've also tried these links:
https://github.com/romeokienzler/learn-chaincode/blob/master/finished/chaincode_finished?raw=true
https://raw.githubusercontent.com/romeokienzler/learn-chaincode/master/finished/chaincode_finished.go
I always get
{ "jsonrpc": "2.0", "error": {
"code": -32001,
"message": "Deployment failure",
"data": "Error when deploying chaincode: Error getting chaincode package bytes: Error getting code 'go get' failed with error: 'exit
status 1'\npackage
github.com/romeokienzler/learn-chaincode/tree/master/finished: cannot
find package
'github.com/romeokienzler/learn-chaincode/tree/master/finished' in any
of:\n\t/usr/local/go/src/github.com/romeokienzler/learn-chaincode/tree/master/finished
(from
$GOROOT)\n\t/go/usercode/552962906/src/github.com/romeokienzler/learn-chaincode/tree/master/finished
(from
$GOPATH)\n\t/go/src/github.com/romeokienzler/learn-chaincode/tree/master/finished\n"
}, "id": 1 }
Any idea?
Considering that you are playing with the Bluemix service, I assume you are following the "Implementing your first chaincode" tutorial.
If you look at your forked repository, you will see instructions to use branch v1.0 for the Bluemix Blockchain Service (link); the IBM Bluemix service is (still) using Fabric v0.5.
Once you have registered with one of the available enroll IDs, you should be able to deploy your chaincode using this DeploySpec (note the path: "https://github.com/romeokienzler/learn-chaincode/tree/v1.0/finished"):
{
  "jsonrpc": "2.0",
  "method": "deploy",
  "params": {
    "type": 1,
    "chaincodeID": {
      "path": "https://github.com/romeokienzler/learn-chaincode/tree/v1.0/finished"
    },
    "ctorMsg": {
      "function": "init",
      "args": [
        "hi there"
      ]
    },
    "secureContext": "user_type1_0"
  },
  "id": 1
}
First of all, the deploy command should be changed (the value of the path variable is different):
{
  "jsonrpc": "2.0",
  "method": "deploy",
  "params": {
    "type": 1,
    "chaincodeID": {
      "path": "https://github.com/romeokienzler/learn-chaincode/finished"
    },
    "ctorMsg": {
      "function": "init",
      "args": ["hi there"]
    },
    "secureContext": "user_type1_0"
  },
  "id": 1
}
P.S. As @Mil4n correctly mentioned, IBM Bluemix still works with Fabric v0.5. The chaincode romeokienzler/learn-chaincode/finished should be adapted to this version.
For example, shim.ChaincodeStubInterface is not available yet and should be replaced with *shim.ChaincodeStub.