Cannot inspect the dedicated host provisioning process - ibm-cloud

When I used slcli (the softlayer-python CLI) to create a dedicated host, the command returned an order ID, and the order's status was 'APPROVED'. But the host does not appear in the result of 'SoftLayer_Account/getDedicatedHosts'.
I also checked the billing item, and it is correctly 'dedicated_virtual_hosts'. Does the SoftLayer API support another way to check whether the dedicated host has been provisioned, or did I do something wrong?

Yes, the dedicated host should be listed when calling the SoftLayer_Account::getDedicatedHosts method, or when using the "slcli dedicatedhost list" command. I suggest checking your permissions and device access; verify that "View Virtual Dedicated Host Details" is checked.
Below are some slcli commands I executed to order and list dedicated hosts.
To order a dedicated host:
slcli dedicatedhost create -H slahostname -D example.com -d mex01 -f 56_CORES_X_242_RAM_X_1_4_TB
To list dedicated hosts:
slcli dedicatedhost list
:.......:.............:..........:..............:................:............:............:
:  id   :    name     : cpuCount : diskCapacity : memoryCapacity : datacenter : guestCount :
:.......:.............:..........:..............:................:............:............:
: 11111 : slahostname :       56 :         1200 :            242 : mex01      :          - :
:.......:.............:..........:..............:................:............:............:
Below is an example of how to see the details:
slcli dedicatedhost detail 11111
:.................:...........................:
: name            : value                     :
:.................:...........................:
: id              : 11111                     :
: name            : slahostname               :
: cpu count       : 56                        :
: memory capacity : 242                       :
: disk capacity   : 1200                      :
: create date     : 2018-02-01T09:53:46-04:00 :
: modify date     :                           :
: router id       : 333333                    :
: router hostname : bcr01a.mex01              :
: owner           : owner001                  :
: guest count     : 0                         :
: datacenter      : mex01                     :
:.................:...........................:
Using the REST API, the response when calling SoftLayer_Account::getDedicatedHosts should look something like this:
GET:
https://[userName]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Account/getDedicatedHosts
RESPONSE:
{
    "cpuCount": 56,
    "createDate": "2018-02-01T09:53:46-04:00",
    "diskCapacity": 1200,
    "id": 11111,
    "memoryCapacity": 242,
    "modifyDate": null,
    "name": "slahostname"
}
You can also use the SoftLayer_Virtual_DedicatedHost::getObject method:
GET:
https://[userName]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Virtual_DedicatedHost/11111/getObject
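For completeness, the same REST call can be scripted. Below is a minimal Python sketch (my own addition, not part of the original answer) that fetches the dedicated host list with HTTP Basic auth; the credentials are placeholders:

```python
import base64
import json
import urllib.request

def get_dedicated_hosts(username, api_key):
    """Fetch SoftLayer_Account::getDedicatedHosts over REST.

    username/api_key are placeholders for your portal API credentials.
    """
    url = ("https://api.softlayer.com/rest/v3/"
           "SoftLayer_Account/getDedicatedHosts")
    req = urllib.request.Request(url)
    # Same userName:apiKey pair as in the URL form above, sent as Basic auth
    token = base64.b64encode(f"{username}:{api_key}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling get_dedicated_hosts(...) periodically is one way to poll until a newly ordered host shows up in the account.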

Related

Service unavailable error while using MongoDB, ElasticSearch and transporter

I am trying to use the transporter plugin to create a pipeline that syncs a MongoDB database to Elasticsearch. I am using a Linux (Ubuntu) virtual machine for this.
I have created a MongoDB collection my_application with the following data in it:
db.users.find().pretty();
{
    "_id" : ObjectId("6008153cf979ac0f18681765"),
    "firstName" : "Sammy",
    "lastName" : "Shark"
}
{
    "_id" : ObjectId("60081544f979ac0f18681766"),
    "firstName" : "Gilly",
    "lastName" : "Glowfish"
}
I configured Elasticsearch and the transporter pipeline, and then exported MongoDB_URI and Elastic_URI.
I then ran my transporter pipeline.js and obtained:
INFO[0005] metrics source records: 2 path=source ts=1611154492641006368
INFO[0005] metrics source/sink records: 2 path="source/sink" ts=1611154492641013556
I then try to query my Elasticsearch instance but get this error:
curl $ELASTICSEARCH_URI/_search?pretty=true
{
    "error" : {
        "root_cause" : [
            {
                "type" : "cluster_block_exception",
                "reason" : "blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"
            }
        ],
        "type" : "cluster_block_exception",
        "reason" : "blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"
    },
    "status" : 503
}
Here is my elasticsearch.yml:
# Use a descriptive name for the node:
node.name: node-1
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 0.0.0.0
# Set a custom port for HTTP:
http.port: 9200
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1", "node-2"]
Here is my Elasticsearch node info:
{
    "name" : "node-1",
    "cluster_name" : "elasticsearch",
    "cluster_uuid" : "_na_",
    "version" : {
        "number" : "7.7.1",
        "build_flavor" : "default",
        "build_type" : "deb",
        "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
        "build_date" : "2020-05-28T16:30:01.040088Z",
        "build_snapshot" : false,
        "lucene_version" : "8.5.1",
        "minimum_wire_compatibility_version" : "6.8.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
    },
    "tagline" : "You Know, for Search"
}
I have tried deleting indices and restarting the server, but the error persists. I would like to know the solution to this. I am using Elasticsearch 7.10.
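One hedged observation (not part of the original question): the cluster_uuid of "_na_" in the node info means no master was ever elected, and the posted elasticsearch.yml bootstraps with cluster.initial_master_nodes: ["node-1", "node-2"], so node-1 may be waiting for a node-2 that never joins. If this is intended as a single-node setup, a possible fix is to bootstrap it as such:

```yaml
# elasticsearch.yml - single-node bootstrap
# (remove cluster.initial_master_nodes; the two settings must not be combined)
discovery.type: single-node
```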

SpringBoot MongoDB: how to find the CLOSEST POSITIVE DIFF document?

Considering a collection with the following documents:
[{
    "_id" : ObjectId("5f984d04093a1a0be1237db2"),
    "postalCode" : "",
    "latitude" : -33.4939994812012,
    "longitude" : 143.210403442383,
    "accuracyRadius" : 1000,
    "network" : "1.0.0.0/24",
    "geoNameID" : 2077456,
    "registeredCountryGeoNameID" : 2077456,
    "anonymousProxy" : false,
    "satelliteProvider" : false,
    "_class" : "project.domain.maxmind.block.GeoLite2CityBlockIPv4"
},
{
    "_id" : ObjectId("5f984d04093a1a0be1237db3"),
    "postalCode" : "",
    "latitude" : 34.7724990844727,
    "longitude" : 113.726600646973,
    "accuracyRadius" : 50,
    "network" : "1.0.1.0/24",
    "geoNameID" : 1814991,
    "registeredCountryGeoNameID" : 1814991,
    "anonymousProxy" : false,
    "satelliteProvider" : false,
    "_class" : "project.domain.maxmind.block.GeoLite2CityBlockIPv4"
},
{
    "_id" : ObjectId("5f984d04093a1a0be1237db4"),
    "postalCode" : "",
    "latitude" : 34.7724990844727,
    "longitude" : 113.726600646973,
    "accuracyRadius" : 50,
    "network" : "1.0.2.0/23",
    "geoNameID" : 1814991,
    "registeredCountryGeoNameID" : 1814991,
    "anonymousProxy" : false,
    "satelliteProvider" : false,
    "_class" : "project.domain.maxmind.block.GeoLite2CityBlockIPv4"
}]
How can I write an efficient query for the closest network, starting from an IP address (a string) as input?
E.g., considering the network 1.0.1.0/24 and the IPv4 address 1.0.1.200, I would convert the IPv4 address to its corresponding integer value
1.0.1.200 --> 16777672
and do the same for each network:
network 1.0.0.0/24 --> 16777216
host min 1.0.0.1 --> 16777217
host max 1.0.0.254 --> 16777470
network 1.0.1.0/24 --> 16777472
host min 1.0.1.1 --> 16777473
host max 1.0.1.254 --> 16777726
network 1.0.2.0/23 --> 16777728
host min 1.0.2.1 --> 16777729
host max 1.0.3.254 --> 16778238
Then I would need to take the difference between the IP and each network start to find the closest positive one:
16777672 - 16777216 = 456
16777672 - 16777472 = 200
16777672 - 16777728 = -56
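As a sanity check on the arithmetic above, the conversions and the closest-positive-diff selection can be sketched with Python's standard ipaddress module (Python used here only for illustration; the question itself is Java/Spring):

```python
import ipaddress

ip = int(ipaddress.IPv4Address("1.0.1.200"))  # 16777672

networks = ["1.0.0.0/24", "1.0.1.0/24", "1.0.2.0/23"]
starts = {n: int(ipaddress.ip_network(n).network_address) for n in networks}
# {'1.0.0.0/24': 16777216, '1.0.1.0/24': 16777472, '1.0.2.0/23': 16777728}

# Keep only the non-negative diffs, then take the smallest one
diffs = {n: ip - s for n, s in starts.items() if ip - s >= 0}
closest = min(diffs, key=diffs.get)  # '1.0.1.0/24' (diff 200)
```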
How could I write such a query, considering that both the IPs and networks are Strings?
I could do just the conversion of the IP to its integer value before passing it to the @Repository, but then I would still need to convert the network addresses inside the @Query.
@Repository
public interface GeoLite2CityBlockIPv4Repository extends ReactiveMongoRepository<GeoLite2CityBlockIPv4, BigInteger>, ReactiveQuerydslPredicateExecutor<GeoLite2CityBlockIPv4> {

    Mono<GeoLite2CityBlockIPv4> findOneByNetwork(String network);

    @Query("{?}")
    Mono<GeoLite2CityBlockIPv4> findOneByIP(Integer ip);
}
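One way to avoid converting network strings inside the query is to precompute an integer start address per document at import time (a hypothetical networkStart field, not in the original schema) and then fetch the largest start that does not exceed the IP. A sketch of the filter/sort pair, in Python for brevity:

```python
import ipaddress

def closest_network_query(ip_str):
    """Build a MongoDB filter and sort spec for the closest network.

    Assumes each document carries a precomputed integer 'networkStart'
    field (hypothetical; derived from the 'network' CIDR at load time).
    """
    ip_int = int(ipaddress.IPv4Address(ip_str))
    filter_doc = {"networkStart": {"$lte": ip_int}}
    # Largest start <= ip is the smallest positive diff; take the first hit
    sort_spec = [("networkStart", -1)]
    return filter_doc, sort_spec
```

With Spring Data, the same query shape could back the @Query on a derived networkStart property instead of the raw network string.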

Find out user who created database/collection in MongoDB

I have many applications on my server that use MongoDB. I want to find out the username that was used to create a specific db/collection.
I see that some application is malfunctioning and keeps creating dbs dynamically. I want to identify the application through the user it connects as.
What I have done so far: I found the connection information in the mongodb logs by grepping for that database, and then ran this query:
db.currentOp(true).inprog.forEach(function(o){if(o.connectionId == 502925 ) printjson(o)});
And this is the result I am getting:
{
    "host" : "abcd-server:27017",
    "desc" : "conn502925",
    "connectionId" : 502925,
    "client" : "127.0.0.1:39266",
    "clientMetadata" : {
        "driver" : {
            "name" : "mongo-java-driver",
            "version" : "3.6.4"
        },
        "os" : {
            "type" : "Linux",
            "name" : "Linux",
            "architecture" : "amd64",
            "version" : "3.10.0-862.14.4.el7.x86_64"
        },
        "platform" : "Java/AdoptOpenJDK/1.8.0_212-b03"
    },
    "active" : false,
    "currentOpTime" : "2019-07-02T07:31:39.518-0500"
}
Please let me know if there is any way to find out the user.
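One hedged avenue, assuming access control is actually enabled on the deployment: from MongoDB 3.6 on, the $currentOp aggregation stage can list operations along with an effectiveUsers field identifying the authenticated user behind each connection. A Python sketch, where client is a pymongo MongoClient connected with sufficient privileges:

```python
def ops_for_connection(client, connection_id):
    """List current operations for one connection via $currentOp.

    With authentication enabled, each returned document includes an
    'effectiveUsers' array naming the user behind the connection;
    without auth there is simply no user to report.
    """
    # $currentOp must run against the admin database
    return list(client.admin.aggregate([
        {"$currentOp": {"allUsers": True, "idleConnections": True}},
        {"$match": {"connectionId": connection_id}},
    ]))
```

If the deployment runs without authentication, the logs and currentOp can only tie activity to client IPs and driver metadata, as in the output above.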

Error when trying to connect to a Replica Set in Mongo

When I try to connect to a mongo replica set in AWS I get this error:
slavenode:27017: [Errno -2] Name or service not known, ip-XXX-XX-XX-XX:27017: [Errno -2] Name or service not known
(where XXX-XX.. corresponds to my actual IP address)
The code to connect is shown below:
client = MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/myFirstDB?replicaSet=rs0")
db = client.myFirstDB
try:
    db.command("serverStatus")
except Exception as e:
    print(e)
else:
    print("You are connected!")
client.close()
(where Master-PublicIP and Slave-PublicIP are the actual IPv4 public IPs from the AWS console)
I already have a replica set, and its configuration is:
rs0:PRIMARY> rs.conf()
{
    "_id" : "rs0",
    "version" : 2,
    "members" : [
        {
            "_id" : 0,
            "host" : "ip-XXX-XX-XX-XXX:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "SlaveNode:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {
        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}
I have created /data/db on the PRIMARY and /data/db1 on the SECONDARY, and I have given them the proper ownership with sudo chmod -R 755 /data/db.
My MongoDB version is 3.0.15. Does anyone know what is going wrong?
Thanks in advance.
Have you tried removing the myFirstDB from within the MongoClient()?
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0")
Because your next line then specifies which db you want to use
db = client.myFirstDB
Or I think you can specify the db by putting a dot after the closing parenthesis of the MongoClient() call:
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0").myFirstDB
I managed to solve the problem. As @N3i1 suggested in the comments, I used the Public DNS (IPv4). There was an issue with the hosts I had declared in /etc/hosts.
In that file I had defined the IPs of the master/slaves with some names. For some reason this didn't work, so I deleted them and then reconfigured the replica set.
On the PRIMARY, in the mongo shell, I did:
cfg = {"_id" : "rs0", "members" : [{"_id" : 0,"host" : "Public DNS (IPv4):27017"},{"_id" : 1,"host" : "Public DNS (IPv4):27017"}]}
rs.reconfig(cfg,{force: true});
Then I connected to the replica set from Python with:
MongoClient("mongodb://Public DNS (IPv4):27017,Public DNS (IPv4):27017/?replicaSet=rs0")
Of course, replace the Public DNS (IPv4) addresses with yours.

MongoDB replica set host name change error

I have a mongodb replica set on Ubuntu. In the replica set, the hosts are defined as localhost, as you can see:
{
    "_id" : "myrep",
    "version" : 4,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27017"
        },
        {
            "_id" : 2,
            "host" : "localhost:27018"
        },
        {
            "_id" : 1,
            "host" : "localhost:27019",
            "priority" : 0
        }
    ]
}
I want to change the host addresses to the real IP of the server. But when I run rs.reconfig, I get this error:
{
    "assertion" : "hosts cannot switch between localhost and hostname",
    "assertionCode" : 13645,
    "errmsg" : "db assertion failure",
    "ok" : 0
}
How can I solve it?
Thank you.
There is a cleaner way to do this:
use local
cfg = db.system.replset.findOne({_id:"replicaSetName"})
cfg.members[0].host="newHost:27017"
db.system.replset.update({_id:"replicaSetName"},cfg)
then restart mongo
The only way I found to change the host names was recreating the replica set. To do it right, the db directories need to be cleaned; then, starting all servers in replication mode and creating a new replica set with the new host names fixed it.