The python-ambariclient library has an API for retrieving the host_components:
ambari.services(service_name).components(component_name).host_components
How can I extract the namenode host name for an IBM Analytics Engine cluster?
I think I need to make the call:
GET https://xxxx.bi.services.us-south.bluemix.net:9443/api/v1/clusters/AnalyticsEngine/services/HDFS/components/NAMENODE?fields=host_components
Which retrieves the following information:
{
"href" : "https://xxxx.bi.services.us-south.bluemix.net:9443/api/v1/clusters/AnalyticsEngine/services/HDFS/components/NAMENODE?fields=host_components",
"ServiceComponentInfo" : {
"cluster_name" : "AnalyticsEngine",
"component_name" : "NAMENODE",
"service_name" : "HDFS"
},
"host_components" : [
{
"href" : "https://xxxx.bi.services.us-south.bluemix.net:9443/api/v1/clusters/AnalyticsEngine/hosts/xxxx.bi.services.us-south.bluemix.net/host_components/NAMENODE",
"HostRoles" : {
"cluster_name" : "AnalyticsEngine",
"component_name" : "NAMENODE",
"host_name" : "xxxx.bi.services.us-south.bluemix.net"
}
}
]
}
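If you just want to sanity-check that REST call before reaching for python-ambariclient, here is a minimal sketch using requests. It assumes the same vcap.json credentials used in the answer below and the cluster name AnalyticsEngine taken from the URL above:
import json
import requests

# Assumption: vcap.json holds the Ambari console URL and credentials, as in the answer below.
vcap = json.load(open('./vcap.json'))
user = vcap['cluster']['user']
password = vcap['cluster']['password']
ambari_url = vcap['cluster']['service_endpoints']['ambari_console']

# Cluster name taken from the example URL above.
url = (ambari_url +
       '/api/v1/clusters/AnalyticsEngine/services/HDFS/components/NAMENODE'
       '?fields=host_components')
response = requests.get(url, auth=(user, password))
response.raise_for_status()

# host_components is a list; each entry carries the host name under HostRoles.
print(response.json()['host_components'][0]['HostRoles']['host_name'])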
First, install the python-ambariclient library:
! pip install --quiet python-ambariclient
Next, you can use the following to retrieve the name node host name:
from future.standard_library import install_aliases
install_aliases()
from urllib.parse import urlparse
import json
vcap = json.load(open('./vcap.json'))
USER = vcap['cluster']['user']
PASSWORD = vcap['cluster']['password']
AMBARI_URL = vcap['cluster']['service_endpoints']['ambari_console']
CLUSTER_ID = vcap['cluster']['cluster_id']
url = urlparse(AMBARI_URL)
HOST = url.hostname
PORT = url.port
PROTOCOL = url.scheme
from ambariclient.client import Ambari
ambari = Ambari(HOST, port=PORT, username=USER, password=PASSWORD, protocol=PROTOCOL)
CLUSTER_NAME = ambari.clusters.next().cluster_name # gets first cluster - there will only be one
namenode_hc = ambari.clusters(CLUSTER_NAME).services('HDFS').components('NAMENODE').host_components
namenode_host_name = [hc.host_name for hc in namenode_hc if hc.host_name][0]
print(namenode_host_name)
I have created a library to extract this information. Install with:
pip install --quiet --upgrade git+https://github.com/snowch/ibm-analytics-engine-python#master
Then run:
from ibm_analytics_engine import AmbariOperations
ambari_ops = AmbariOperations(vcap_filename='./vcap.json')
ambari_ops.get_namenode_hostname()
Related
I want to access my Postgres cluster, which runs in Kubernetes on Azure, with a client (e.g. pgAdmin) to search manually through the data.
At the moment my cluster only has one ingress, which points to a self-written API gateway.
I found a few ideas online and tried to add a load balancer in Kubernetes, without success.
My Postgres cluster in Terraform:
resource "helm_release" "postgres-cluster" {
name = "postgres-cluster"
repository = "https://charts.bitnami.com/bitnami"
chart = "postgresql-ha"
namespace = var.kube_namespace
set {
name = "global.postgresql.username"
value = var.postgresql_username
}
set {
name = "global.postgresql.password"
value = var.postgresql_password
}
}
Results in a running cluster:
Now my attempt to add a load balancer:
resource "kubernetes_manifest" "postgresql-loadbalancer" {
manifest = {
"apiVersion" = "v1"
"kind" = "Service"
"metadata" = {
"name" = "postgres-db-lb"
"namespace" = "${var.kube_namespace}"
}
"spec" = {
"selector" = {
"app.kubernetes.io/name" = "postgresql-ha"
}
"type" = "LoadBalancer"
"ports" = [{
"port" = "5432"
"targetPort" = "5432"
}]
}
}
}
This results in:
But I still have no success when trying to connect to the external IP and port:
Found the answer: it was an internal firewall I had never thought of. The code is absolutely correct; a LoadBalancer does work here.
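For anyone hitting the same symptom, a quick way to tell a firewall problem apart from a broken Service is to test raw TCP reachability of the LoadBalancer's external IP. A minimal Python sketch (the IP below is only a placeholder for the external IP assigned to postgres-db-lb):
import socket

EXTERNAL_IP = "203.0.113.10"  # placeholder: the external IP assigned to postgres-db-lb
PORT = 5432

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect((EXTERNAL_IP, PORT))
    print("TCP connect succeeded - the Service and LoadBalancer are working")
except OSError as exc:
    print("TCP connect failed ({}) - check firewalls / network rules in front of the cluster".format(exc))
finally:
    sock.close()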
I am deploying an AWS (ap-south-1) Rancher setup with Terraform version 1.1.9.
I get the error below during terraform apply.
Versions used:
Rancher version : Rocky-8.5-rancher-2.6.3
kubernetes_version : v1.21.7-rancher1-1
kubernetes : 2.11.0
helm : 2.5.1
rancher2 : 1.24.0
rancher/rke : 1.3.0
cert-manager : 1.5.0
╷
│ Error: Bad response statusCode [422]. Status [422 Unprocessable Entity]. Body: [baseType=error, code=InvalidBodyContent, message=cluster [c-xxgkz] status version is not available yet. Cannot validate kube version for template [system-library-rancher-monitoring-0.3.2]] from [https://ec2-13-232-176-25.ap-south-1.compute.amazonaws.com/v3/clusters/c-xxgkz?action=enableMonitoring]
│
│ with module.rke_custom_cluster.rancher2_cluster.rancher2-custom-cluster,
│ on .terraform/modules/rke_custom_cluster/rancher2_custom_cluster.tf line 20, in resource "rancher2_cluster" "rancher2-custom-cluster":
│ 20: resource "rancher2_cluster" "rancher2-custom-cluster" {
│
╵
How can I resolve this error?
Cluster monitoring versions tried:
0.1.0,
0.1.4,
0.3.1,
0.3.2
Code snippet:
resource "rancher2_cluster" "rancher2-custom-cluster" {
name = var.rancher2_custom_cluster_name
cluster_template_id = var.rke_template_id
cluster_template_revision_id = var.rke_template_revisions_id
enable_cluster_monitoring = var.enable_cluster_monitoring
cluster_monitoring_input {
answers = {
"exporter-kubelets.https" = var.exporter_kubelets_https
"exporter-node.enabled" = var.exporter_node_enabled
"exporter-node.ports.metrics.port" = var.exporter_node_ports_metrics_port
"exporter-node.resources.limits.cpu" = var.exporter_node_resources_limits_cpu
"exporter-node.resources.limits.memory" = var.exporter_node_resources_limits_memory
"grafana.persistence.enabled" = var.grafana_persistence_enabled
"grafana.persistence.size" = var.grafana_persistence_size
"grafana.persistence.storageClass" = var.grafana_persistence_storageClass
"operator.resources.limits.memory" = var.operator_resources_limits_memory
"prometheus.persistence.enabled" = var.prometheus_persistence_enabled
"prometheus.persistence.size" = var.prometheus_persistence_size
"prometheus.persistence.storageClass" = var.prometheus_persistence_storageClass
"prometheus.persistent.useReleaseName" = var.prometheus_persistent_useReleaseName
"prometheus.resources.core.limits.cpu" = var.prometheus_resources_core_limits_cpu,
"prometheus.resources.core.limits.memory" = var.prometheus_resources_core_limits_memory
"prometheus.resources.core.requests.cpu" = var.prometheus_resources_core_requests_cpu
"prometheus.resources.core.requests.memory" = var.prometheus_resources_core_requests_memory
"prometheus.retention" = var.prometheus_retention
"grafana.nodeSelectors[0]" = var.node_selector
"operator.nodeSelectors[0]" = var.node_selector
"prometheus.nodeSelectors[0]" = var.node_selector
"exporter-kube-state.nodeSelectors[0]" = var.node_selector
}
version = var.cluster_monitoring_version
}
#depends_on = [ null_resource.rke_custom_cluster_dependency_getter ]
depends_on = [ null_resource.wait_for_rancher2 ]
}
Note: cluster monitoring version 0.2.0 or above can't be enabled until the cluster is fully deployed, because a kubeVersion requirement has been introduced in the Helm chart.
Passing the version as null (an empty string) in the code above cleared the error and the setup was created, i.e. the cluster_monitoring_input block ends with:
}
version = ""
}
We can then install monitoring through the Rancher API once the cluster is fully deployed.
Hi, I'm importing a resource but it's failing. I'm not sure what the issue is. Can someone point me to how to fix this error?
I tried setting sslmode = "require" and got the same error.
SSL is on in the database and force.ssl is off.
Terraform v0.12.20
provider.aws v2.58.0
provider.postgresql v1.5.0
Your version of Terraform is out of date! The latest version
My module:
locals.tf:
pgauth_dbs = var.env == "prod" ? var.prod_dbs : var.stage_dbs
variables.tf
variable "stage_dbs" {
type = list(string)
default = ["host_configs", "staging", "staging_preview"]
}
Provider
provider "postgresql" {
version = ">1.4.0"
alias = "pg1"
host = aws_db_instance.name.address
port = aws_db_instance.name.port
username = var.username
password = var.master_password
expected_version = aws_db_instance.name.engine_version
sslmode = "disable"
connect_timeout = 15
}
module:
resource "postgresql_database" "pgauth_dbs" {
provider = postgresql.pg1
for_each = toset(local.pgauth_dbs)
name = each.value
owner = "postgres"
}
Root-Module:
module "rds" {
source = "../../../../tf_module_rds"
username = "postgres"
master_password = data.aws_kms_secrets.secrets_password.plaintext["password"]
engine_version = "11.5"
instance_class = "db.m5.xlarge"
allocated_storage = "300"
storage_type = "gp2"
}
terraform import module.rds.postgresql_database.name_dbs["host_configs"] host_configs
module.rds.postgresql_database.name_dbs["host_configs"]: Importing from ID "host_configs"...
module.rds.postgresql_database.name_dbs["host_configs"]: Import prepared!
Prepared postgresql_database for import
module.rds.postgresql_database.name_dbs["host_configs"]: Refreshing state... [id=host_configs]
Error: could not start transaction: pq: no PostgreSQL user name specified in startup packet
The provider should point to the instance's username and password attributes, not the input variables:
provider "postgresql" {
version = ">1.4.0"
alias = "pg1"
host = aws_db_instance.name.address
port = aws_db_instance.name.port
username = aws_db_instance.name.username
password = aws_db_instance.name.password
database = aws_db_instance.name.name
expected_version = aws_db_instance.name.engine_version
sslmode = "disable"
connect_timeout = 15
}
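As a quick sanity check outside Terraform, you can try connecting with the instance's master credentials directly before re-running the import. A minimal sketch with psycopg2; the endpoint and password below are placeholders, and it mirrors the sslmode and connect_timeout settings of the provider above:
import psycopg2

# Placeholders: use your RDS endpoint and the instance's master username/password.
conn = psycopg2.connect(
    host="my-instance.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="postgres",
    user="postgres",
    password="<master password>",
    sslmode="disable",        # matches the provider configuration above
    connect_timeout=15,
)
# If this connects, the same username/password should work for the postgresql provider.
print(conn.server_version)
conn.close()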
I added database.runMigration: true to my build.gradle file but I'm getting this error when running deployNodes. What's causing this?
[ERROR] 14:05:21+0200 [main] subcommands.ValidateConfigurationCli.logConfigurationErrors$node - Error(s) while parsing node configuration:
- for path: "database.runMigration": Unknown property 'runMigration'
Here's my build.gradle's deployNodes task:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
directory "./build/nodes"
ext.drivers = ['.jdbc_driver']
ext.extraConfig = [
'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
'dataSourceProperties.dataSource.user' : "corda",
'dataSourceProperties.dataSource.password' : "corda1234",
'database.transactionIsolationLevel' : 'READ_COMMITTED',
'database.runMigration' : "true"
]
nodeDefaults {
projectCordapp {
deploy = false
}
cordapp project(':cordapp-contracts-states')
cordapp project(':cordapp')
}
node {
name "O=HUS,L=Helsinki,C=FI"
p2pPort 10008
rpcSettings {
address "localhost:10009"
adminAddress "localhost:10049"
}
webPort 10017
rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
extraConfig = ext.extraConfig + [
'dataSourceProperties.dataSource.url' :
"jdbc:postgresql://localhost:5432/hus_db?currentSchema=corda_schema"
]
drivers = ext.drivers
}
}
database.runMigration is a Corda Enterprise-only property.
To control database schema creation in open source Corda, use initialiseSchema instead; for example, replace 'database.runMigration' : "true" with 'database.initialiseSchema' : "true" in the extraConfig above.
initialiseSchema
Boolean which indicates whether to update the database schema at startup (or create the schema when node starts for the first time). If set to false on startup, the node will validate if it’s running against a compatible database schema.
Default: true
You may refer to the link below for other database properties that you can set:
https://docs.corda.net/corda-configuration-file.html
I am trying to use Cygnus with MongoDB, but no data has been persisted in the database.
Here is the notification received by Cygnus:
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Starting transaction (1437482681-118-0000000000)
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "55a73819d0c457bb20b1d467", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "enocean", "isPattern" : "false", "id" : "enocean:myButtonA", "attributes" : [ { "name" : "ButtonValue", "type" : "", "value" : "ON", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-07-20T21:29:56.509293Z" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Event put in the channel (id=1454120446, ttl=10)
Here is my agent configuration:
cygnusagent.sources = http-source
cygnusagent.sinks = OrionMongoSink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /home/egm_demo/usr/fiware-cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
#cygnusagent.sinks.mongo-sink.db_prefix = kura
# prefix for the MongoDB collections
#cygnusagent.sinks.mongo-sink.collection_prefix = button
# true if collection names are based on a hash, false for human readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# ============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Here is my rule :
{
"grouping_rules": [
{
"id": 1,
"fields": [
"button"
],
"regex": ".*",
"destination": "kura",
"fiware_service_path": "/kuraspath"
}
]
}
Any ideas of what I have missed? Thanks in advance for your help!
This configuration parameter is wrong:
cygnusagent.sinks = OrionMongoSink
According to the rest of your configuration, it must be mongo-sink (i.e. cygnusagent.sinks = mongo-sink), since you are configuring a Mongo sink named mongo-sink in lines such as cygnusagent.sinks.mongo-sink.type.
In addition, I would recommend not using the grouping rules feature; it is an advanced feature for sending the data to a collection different from the default one, and at a first stage I would play with the default behaviour. Thus, my recommendation is to leave the path to the file in cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file, but comment out all the JSON within it :)
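Once the sink name is fixed, you can check from Python whether Cygnus is actually persisting anything. A minimal sketch with pymongo (a recent pymongo version assumed) pointed at the mongo_hosts value from the agent configuration; it simply lists every database and collection with its document count, since the exact names depend on your db_prefix/collection_prefix settings:
from pymongo import MongoClient

# mongo_hosts from the agent configuration above
client = MongoClient('127.0.0.1', 27017)

# List every database/collection so you can see what Cygnus has created and persisted.
for db_name in client.list_database_names():
    db = client[db_name]
    for coll_name in db.list_collection_names():
        print(db_name, coll_name, db[coll_name].count_documents({}))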