Good morning,
I have the following set of virtual machines:
VM A
Generic Enablers Orion and Cygnus
IP: 10.10.0.10
VM B
MongoDB
IP: 10.10.0.17
Cygnus configuration is:
/usr/cygnus/conf/cygnus_instance_mongodb.conf
#####
#
# Configuration file for apache-flume
#
#####
# Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
#
# This file is part of fiware-connectors (FI-WARE project).
#
# cosmos-injector is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# cosmos-injector is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along with fiware-connectors. If not, see
# http://www.gnu.org/licenses/.
#
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es
# Who to run cygnus as. Note that you may need to use root if you want
# to run cygnus in a privileged port (<1024)
CYGNUS_USER=cygnus
# Where is the config folder
CONFIG_FOLDER=/usr/cygnus/conf
# Which is the config file
CONFIG_FILE=/usr/cygnus/conf/agent_mongodb.conf
# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME=cygnusagent
# Name of the logfile located at /var/log/cygnus. It is important to keep the '.log' extension so that log rotation works properly
LOGFILE_NAME=cygnus.log
# Administration port. Must be unique per instance
ADMIN_PORT=8081
# Polling interval (seconds) for the configuration reloading
POLLING_INTERVAL=30
/usr/cygnus/conf/agent_mongodb.conf
#####
#
# Copyright 2014 Telefónica Investigación y Desarrollo, S.A.U
#
# This file is part of fiware-connectors (FI-WARE project).
#
# fiware-connectors is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# fiware-connectors is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along with fiware-connectors. If not, see
# http://www.gnu.org/licenses/.
#
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es
#=============================================
# To be put in APACHE_FLUME_HOME/conf/agent.conf
#
# General configuration template explaining how to setup a sink of each of the available types (HDFS, CKAN, MySQL).
#=============================================
# The next three fields set the sources, sinks and channels used by Cygnus. You could use different names than the
# ones suggested below, but in that case make sure you keep the property names coherent throughout the configuration file.
# Regarding sinks, you can use multiple types at the same time; the only requirement is to provide a channel for each
# one of them (this example shows how to configure 3 sink types at the same time). You can even define more than one
# sink of the same type sharing the channel in order to improve the performance (this is similar to having
# multi-threading).
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# Timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# Destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# true if the grouping feature is enabled for this sink, false otherwise
cygnusagent.sinks.mongo-sink.enable_grouping = false
# the FQDN/IP address where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_host = 10.10.0.17:27017
# a valid user in the MongoDB server
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
cygnusagent.sinks.mongo-sink.db_prefix = hvds_
# prefix for the MongoDB collections
cygnusagent.sinks.mongo-sink.collection_prefix = hvds_
# true if collection names are based on a hash, false for human-readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# specify if the sink will use a single collection for each service path, for each entity or for each attribute
cygnusagent.sinks.mongo-sink.data_model = collection-per-entity
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mongo-sink.attr_persistence = column
#=============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
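Before running the steps below, a quick way to confirm that the agent actually started with this configuration is to check that both the notification port (5050) and the admin port (8081) are listening on VM A. A minimal sketch (host and ports taken from the configuration above):
import socket

# ports declared in cygnus_instance_mongodb.conf and agent_mongodb.conf
for port in (5050, 8081):
    try:
        socket.create_connection(("10.10.0.10", port), timeout=3).close()
        print("port {} is reachable".format(port))
    except OSError as exc:
        print("port {} is NOT reachable: {}".format(port, exc))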
I perform the following steps.
First, I subscribe the sensor whose data I want to persist:
(curl http://10.10.0.10:1026/NGSI10/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
"entities": [
{
"type": "Sensor",
"isPattern": "false",
"id": "sensor003"
}
],
"attributes": [
"potencia_max",
"potencia_min",
"coste",
"co2"
],
"reference": "http://localhost:5050/notify",
"duration": "P1M",
"notifyConditions": [
{
"type": "ONTIMEINTERVAL",
"condValues": [
"PT5S"
]
}
]
}
EOF
Then I create or update that data:
(curl http://10.10.0.10:1026/NGSI10/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
"contextElements": [
{
"type": "Sensor",
"isPattern": "false",
"id": "sensor003",
"attributes": [
{
"name":"potencia_max",
"type":"float",
"value":"1000"
},
{
"name":"potencia_min",
"type":"float",
"value":"200"
},
{
"name":"coste",
"type":"float",
"value":"0.24"
},
{
"name":"co2",
"type":"float",
"value":"12"
}
]
}
],
"updateAction": "APPEND"
}
EOF
The calls return the expected results, but when I access the MongoDB instance on VM B to check whether the data has been created and saved, I see that it has not:
admin (empty)
local 0.078GB
localhost (empty)
If I look at the MongoDB instance on VM A instead, I can see that the databases have been created there:
admin (empty)
hvds_def_serv 0.078GB
hvds_qsg 0.078GB
local 0.078GB
orion 0.078GB
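To double-check which MongoDB instance the hvds_ databases actually live on, the databases on both hosts can be listed; a minimal sketch with pymongo (assuming pymongo is installed; port 27017 and the empty credentials come from the sink configuration above):
from pymongo import MongoClient

for host in ("10.10.0.10", "10.10.0.17"):
    client = MongoClient(host, 27017, serverSelectionTimeoutMS=3000)
    try:
        # list_database_names() requires pymongo 3.6+; use database_names() on older versions
        print(host, client.list_database_names())
    except Exception as exc:
        print(host, "not reachable:", exc)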
Could you indicate how I can solve this?
Thank you in advance for your help.
EDIT 1
I subscribe sensor005:
(curl http://10.10.0.10:1026/NGSI10/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
"entities": [{
"type": "Sensor",
"isPattern": "false",
"id": "sensor005"
}],
"attributes": [
"muestreo"
],
"reference": "http://localhost:5050/notify",
"duration": "P1M",
"notifyConditions": [{
"type": "ONCHANGE",
"condValues": [
"muestreo"
]
}],
"throttling": "PT1S"
}
EOF
Then I update the data:
(curl http://10.10.0.10:1026/NGSI10/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
"contextElements": [
{
"type": "Sensor",
"isPattern": "false",
"id": "sensor005",
"attributes": [
{
"name":"magnitud",
"type":"string",
"value":"energia"
},
{
"name":"unidad",
"type":"string",
"value":"Kw"
},
{
"name":"tipo",
"type":"string",
"value":"electrico"
},
{
"name":"valido",
"type":"boolean",
"value":"true"
},
{
"name":"muestreo",
"type":"hora/kw",
"value": {
"tiempo": [
"10:00:31",
"10:00:32",
"10:00:33",
"10:00:34",
"10:00:35",
"10:00:36",
"10:00:37",
"10:00:38",
"10:00:39",
"10:00:40",
"10:00:41",
"10:00:42",
"10:00:43",
"10:00:44",
"10:00:45",
"10:00:46",
"10:00:47",
"10:00:48",
"10:00:49",
"10:00:50",
"10:00:51",
"10:00:52",
"10:00:53",
"10:00:54",
"10:00:55",
"10:00:56",
"10:00:57",
"10:00:58",
"10:00:59",
"10:01:60"
],
"kw": [
"200",
"201",
"200",
"200",
"195",
"192",
"190",
"189",
"195",
"200",
"205",
"210",
"207",
"205",
"209",
"212",
"215",
"220",
"225",
"230",
"250",
"255",
"245",
"242",
"243",
"240",
"220",
"210",
"200",
"200"
]
}
}
]
}
],
"updateAction": "APPEND"
}
EOF
These are the two logs generated:
/var/log/contextBroker/contextBroker.log
/var/log/cygnus/cygnus.log
EDIT 2
/var/log/cygnus/cygnus.log with DEBUG
Related
I am running Kafka and InfluxDB on Docker.
I have created a digital twin on Ditto that correctly updates when I send a message with MQTT.
I want the data to be sent from Ditto to InfluxDB, but once I create the bucket on InfluxDB it shows no data whatsoever.
I have followed this guide: https://www.influxdata.com/blog/getting-started-apache-kafka-influxdb/
(I know it is written for a Python program, but the steps should be the same; I just use the Telegraf Kafka consumer plugin instead of the one used in the guide.)
I have created the connection and the Telegraf configuration file, but nothing happens on InfluxDB.
Here is the telegraf.conf
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB cluster nodes.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
## ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
urls = ["http://localhost:8086"]
## API token for authentication.
token = "$INFLUX_TOKEN"
## Organization is the name of the organization you wish to write to; must exist.
organization = "digital"
## Destination bucket to write into.
bucket = "arduino"
## The value of this tag will be used to determine the bucket. If this
## tag is not set the 'bucket' option is used as the default.
# bucket_tag = ""
## If true, the bucket tag will not be added to the metric.
# exclude_bucket_tag = false
## Timeout for HTTP messages.
# timeout = "5s"
## Additional HTTP headers
# http_headers = {"X-Special-Header" = "Special-Value"}
## HTTP Proxy override, if unset values the standard proxy environment
## variables are consulted to determine which proxy, if any, should be used.
# http_proxy = "http://corporate.proxy:3128"
## HTTP User-Agent
# user_agent = "telegraf"
## Content-Encoding for write request body, can be set to "gzip" to
## compress body or "identity" to apply no encoding.
# content_encoding = "gzip"
## Enable or disable uint support for writing uints influxdb 2.0.
# influx_uint_support = false
## Optional TLS Config for use on HTTP connections.
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
# Read metrics from Kafka topics
[[inputs.kafka_consumer]]
## Kafka brokers.
brokers = ["localhost:9092"]
## Topics to consume.
topics = ["arduino"]
## When set this tag will be added to all metrics with the topic as the value.
# topic_tag = ""
## Optional Client id
# client_id = "Telegraf"
## Set the minimal supported Kafka version. Setting this enables the use of new
## Kafka features and APIs. Must be 0.10.2.0 or greater.
## ex: version = "1.1.0"
# version = ""
## Optional TLS Config
# enable_tls = false
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## SASL authentication credentials. These settings should typically be used
## with TLS encryption enabled
# sasl_username = "kafka"
# sasl_password = "secret"
## Optional SASL:
## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
## (defaults to PLAIN)
# sasl_mechanism = ""
## used if sasl_mechanism is GSSAPI (experimental)
# sasl_gssapi_service_name = ""
# ## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
# sasl_gssapi_auth_type = "KRB5_USER_AUTH"
# sasl_gssapi_kerberos_config_path = "/"
# sasl_gssapi_realm = "realm"
# sasl_gssapi_key_tab_path = ""
# sasl_gssapi_disable_pafxfast = false
## used if sasl_mechanism is OAUTHBEARER (experimental)
# sasl_access_token = ""
## SASL protocol version. When connecting to Azure EventHub set to 0.
# sasl_version = 1
# Disable Kafka metadata full fetch
# metadata_full = false
## Name of the consumer group.
# consumer_group = "telegraf_metrics_consumers"
## Compression codec represents the various compression codecs recognized by
## Kafka in messages.
## 0 : None
## 1 : Gzip
## 2 : Snappy
## 3 : LZ4
## 4 : ZSTD
# compression_codec = 0
## Initial offset position; one of "oldest" or "newest".
# offset = "oldest"
## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
# balance_strategy = "range"
## Maximum length of a message to consume, in bytes (default 0/unlimited);
## larger messages are dropped
max_message_len = 1000000
## Maximum messages to read from the broker that have not been written by an
## output. For best throughput set based on the number of metrics within
## each message and the size of the output's metric_batch_size.
##
## For example, if each message from the queue contains 10 metrics and the
## output metric_batch_size is 1000, setting this to 100 will ensure that a
## full batch is collected and the write is triggered immediately without
## waiting until the next flush_interval.
# max_undelivered_messages = 1000
## Maximum amount of time the consumer should take to process messages. If
## the debug log prints messages from sarama about 'abandoning subscription
## to [topic] because consuming was taking too long', increase this value to
## longer than the time taken by the output plugin(s).
##
## Note that the effective timeout could be between 'max_processing_time' and
## '2 * max_processing_time'.
# max_processing_time = "100ms"
## The default number of message bytes to fetch from the broker in each
## request (default 1MB). This should be larger than the majority of
## your messages, or else the consumer will spend a lot of time
## negotiating sizes and not actually consuming. Similar to the JVM's
## `fetch.message.max.bytes`.
# consumer_fetch_default = "1MB"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "json"
The Kafka connection as it is shown on the Ditto explorer:
{
"id": "0ab4b527-617f-4f4f-8bac-4ffa4b5a8471",
"name": "Kafka 2.x",
"connectionType": "kafka",
"connectionStatus": "open",
"uri": "tcp://192.168.109.74:9092",
"sources": [
{
"addresses": [
"arduino"
],
"consumerCount": 1,
"qos": 1,
"authorizationContext": [
"nginx:ditto"
],
"enforcement": {
"input": "{{ header:device_id }}",
"filters": [
"{{ entity:id }}"
]
},
"acknowledgementRequests": {
"includes": []
},
"headerMapping": {},
"payloadMapping": [
"Ditto"
],
"replyTarget": {
"address": "theReplyTopic",
"headerMapping": {},
"expectedResponseTypes": [
"response",
"error",
"nack"
],
"enabled": true
}
}
],
"targets": [
{
"address": "topic/key",
"topics": [
"_/_/things/twin/events",
"_/_/things/live/messages"
],
"authorizationContext": [
"nginx:ditto"
],
"headerMapping": {}
}
],
"clientCount": 1,
"failoverEnabled": true,
"validateCertificates": true,
"processorPoolSize": 1,
"specificConfig": {
"saslMechanism": "plain",
"bootstrapServers": "localhost:9092"
},
"tags": []
}
The policy file for Ditto:
{
"policyId": "my.test:policy1",
"entries": {
"owner": {
"subjects": {
"nginx:ditto": {
"type": "nginx basic auth user"
}
},
"resources": {
"thing:/": {
"grant": ["READ","WRITE"],
"revoke": []
},
"policy:/": {
"grant": ["READ","WRITE"],
"revoke": []
},
"message:/": {
"grant": ["READ","WRITE"],
"revoke": []
}
}
},
"observer": {
"subjects": {
"ditto:observer": {
"type": "observer user"
}
},
"resources": {
"thing:/features": {
"grant": ["READ"],
"revoke": []
},
"policy:/": {
"grant": ["READ"],
"revoke": []
},
"message:/": {
"grant": ["READ"],
"revoke": []
}
}
}
}
}
When Telegraf reads data from Kafka, it needs to transform it into time-series metrics that InfluxDB can digest. You have correctly selected the JSON parser, but additional configuration may be required, or even the use of the more powerful json_v2 parser, to be able to set the tags and fields based on the JSON data.
My suggestion is to use the [[outputs.file]] output to see if anything is even getting passed; probably nothing will show up. Then do the following:
determine what your JSON looks like in Kafka (see the sketch after this list)
decide what you want that JSON to look like as time-series data in InfluxDB
use the json_v2 parser to set the appropriate tags and fields.
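For the first step, here is a minimal sketch that inspects the messages directly on the topic (it assumes the kafka-python package is installed and that the broker is reachable at localhost:9092, as in the Telegraf config above):
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "arduino",                          # topic used by Ditto and Telegraf above
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",       # also read messages already on the topic
    consumer_timeout_ms=10000,          # stop after 10 s of inactivity
)

for message in consumer:
    try:
        print(json.dumps(json.loads(message.value), indent=2))
    except ValueError:
        # not valid JSON: Telegraf's json parser will fail on this as well
        print("non-JSON payload:", message.value)
If the payload turns out to be a full Ditto event envelope rather than flat key/value pairs, the plain json parser may not produce useful fields, which would match the empty bucket you are seeing; that is exactly the case where json_v2 with explicit tag and field paths helps.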
In a single Deployment Manager template, how do I create a new folder and a new project underneath it? The problem is that the reference to the folder includes a name in the format folders/123456, but the project requires a parent field in the format {'type': 'folder', 'id': 123456}. Using a $(ref.new-folder.name) won't work for the ID field in the parent record for the new project.
It feels like I need to do string manipulation on the $(ref.new-folder.name) like this:
# DOES NOT WORK
# but if it did, I could extract the numeric id from 'folders/123456'
parent_id = '$(ref.new-folder.name)'.replace('folders/', '')
But, of course, that won't work.
Here is my (non-working) attempt:
# Template for new folder & new project
folder_resource = {
    'name': 'new-folder',
    'type': 'gcp-types/cloudresourcemanager-v2:folders',
    'properties': {
        'parent': 'organizations/99999',
        'displayName': 'new-folder'
    }
}
project_resource = {
    'name': 'new-project',
    'type': 'cloudresourcemanager.v1.project',
    'metadata': { 'dependsOn': ['new-folder'] },
    'properties': {
        'name': 'new-project',
        'parent': {
            'type': 'folder',
            # HERE it is -- the problem!
            'id': '$(ref.new-folder.name)'
        }
    }
}
return { 'resources': [folder_resource, project_resource] }
So, to reiterate, I'm getting hung up on extracting the numeric folder ID from the reference to the folder's name. The name is in the format folders/123456, but I just need the 123456 part to use in the parent field for the new project.
This question is specific to folder & project creation, but a more generalized question would be: is there a way to do string-manipulation on the value of references?
For creating and managing folders, document [a] might be helpful. A folder name must meet the following requirements:
The name may contain letters, digits, spaces, hyphens and underscores.
The folder's display name must start and end with a letter or digit.
The name must be 30 characters or less.
The name must be distinct from all other folders that share its parent.
To create a folder:
Folders can be created with an API request.
The request JSON:
request_json='{
  "displayName": "[DISPLAY_NAME]"
}'
The Create Folder curl request:
curl -X POST -H "Content-Type: application/json" \
-H "Authorization: Bearer ${bearer_token}" \
-d "$request_json" \
https://cloudresourcemanager.googleapis.com/v2/folders?parent=[ORGANIZATION_NAME]
Where:
- [DISPLAY_NAME] is the new folder's display name, for example "My Awesome Folder".
- [ORGANIZATION_NAME] is the name of the organization under which you're creating the folder, for example organizations/123.
The Create Folder response:
{
"name": "operations/fc.123456789",
"metadata": {
"#type": "type.googleapis.com/google.cloud.resourcemanager.v2.FolderOperation",
"displayName": "[DISPLAY_NAME]",
"operationType": "CREATE"
}
}
The Get Operation curl request:
curl -H "Authorization: Bearer ${bearer_token}" \
https://cloudresourcemanager.googleapis.com/v1/operations/fc.123456789
The Get Operation response:
{
"name": "operations/fc.123456789",
"metadata": {
"#type": "type.googleapis.com/google.cloud.resourcemanager.v2.FolderOperation",
"displayName": "[DISPLAY_NAME]",
"operationType": "CREATE"
},
"done": true,
"response": {
"#type": "type.googleapis.com/google.cloud.resourcemanager.v2.Folder",
"name": "folders/12345",
"parent": "organizations/123",
"displayName": "[DISPLAY_NAME]",
"lifecycleState": "ACTIVE",
"createTime": "2017-07-19T23:29:26.018Z",
"updateTime": "2017-07-19T23:29:26.046Z"
}
}
Configuring access to folders
SetIamPolicy sets the access control policy on a folder, replacing any existing policy. The resource field should be the folder's resource name, for example folders/1234.
request_json='{
  "policy": {
    "version": 1,
    "bindings": [
      {
        "role": "roles/resourcemanager.folderEditor",
        "members": [
          "user:email1@example.com",
          "user:email2@example.com"
        ]
      }
    ]
  }
}'
The curl request:
curl -X POST -H "Content-Type: application/json" \
-H "Authorization: Bearer ${bearer_token}" \
-d "$request_json" \
https://cloudresourcemanager.googleapis.com/v2/[FOLDER_NAME]:setIamPolicy
Where:
- [FOLDER_NAME] is the name of the folder whose IAM policy is being set, for example folders/123.
Creating a project in a folder
request_json='{
  "name": "[DISPLAY_NAME]",
  "projectId": "[PROJECT_ID]",
  "parent": {"id": "[PARENT_ID]", "type": "[PARENT_TYPE]"}
}'
The curl request:
curl -X POST -H "Content-Type: application/json" \
-H "Authorization: Bearer ${bearer_token}" \
-d "$request_json" \
https://cloudresourcemanager.googleapis.com/v1/projects
Where:
- [PROJECT_ID] is the ID of the project being created, e.g. my-awesome-proj-123.
- [DISPLAY_NAME] is the display name of the project being created.
- [PARENT_ID] is the ID of the parent under which the project is created, e.g. 123.
- [PARENT_TYPE] is the type of the parent, such as "folder" or "organization".
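Outside of Deployment Manager, the whole flow above can be chained so that the numeric folder ID is extracted from the folders/123456 name before the project is created. A minimal sketch using the requests library (the bearer token, organization, project ID and display names are placeholders taken from or modelled on the examples above; error handling is omitted):
import time
import requests

TOKEN = "ya29...."  # placeholder bearer token
HEADERS = {"Authorization": "Bearer " + TOKEN, "Content-Type": "application/json"}

# 1. Create the folder under the organization
op = requests.post(
    "https://cloudresourcemanager.googleapis.com/v2/folders",
    params={"parent": "organizations/99999"},
    json={"displayName": "new-folder"},
    headers=HEADERS,
).json()

# 2. Poll the returned long-running operation until it is done
while not op.get("done"):
    time.sleep(2)
    op = requests.get(
        "https://cloudresourcemanager.googleapis.com/v1/" + op["name"],
        headers=HEADERS,
    ).json()

# 3. The folder name looks like "folders/123456"; keep only the numeric part
folder_id = op["response"]["name"].split("/")[1]

# 4. Create the project with the folder as its parent
requests.post(
    "https://cloudresourcemanager.googleapis.com/v1/projects",
    json={
        "name": "new-project",
        "projectId": "new-project-123",  # placeholder, must be globally unique
        "parent": {"type": "folder", "id": folder_id},
    },
    headers=HEADERS,
)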
When we create a reference to a resource, we also create a dependency between resources; document [b] might be helpful for this.
[a] https://cloud.google.com/resource-manager/docs/creating-managing-folders
[b] https://cloud.google.com/deployment-manager/docs/configuration/use-references
I am able to build a Jenkins job with its parameters' default values by sending a POST call to
http://jenkins:8080/view/Orion_phase_2/job/test_remote_api_triggerring/buildWithParameters
and I can override the default parameters "product", "suites" and "markers by sending to this URL:
http://jenkins:8080/view/Orion_phase_2/job/test_remote_api_triggerring/buildWithParameters?product=ALL&suites=ALL&markers=ALL
But I have seen examples where the parameters can be overridden by sending a JSON body with new values. I am trying to do that by sending the following JSON bodies; neither of them works for me.
{
'product': 'ALL',
'suites': 'ALL',
'markers': 'ALL'
}
and
{
"parameter": [
{
"name": "product",
"value": "ALL"
},
{
"name": "suites",
"value": "ALL"
},
{
"name": "markers",
"value": "ALL"
}
]
}
What JSON should I send if I want to override the values of the parameters "product", "suites" and "markers"?
I'll leave the original question as is and elaborate here on the various API calls to trigger parameterized builds. These are the call options that I used.
Additional documentation: https://wiki.jenkins.io/display/JENKINS/Remote+access+API
The job contains 3 parameters named: product, suites, markers
Send the parameters as URL query parameters to /buildWithParameters:
http://jenkins:8080/view/Orion_phase_2/job/test_remote_api_triggerring/buildWithParameters?product=ALL&suites=ALL&markers=ALL
Send the parameters as a JSON data payload to /build:
http://jenkins:8080/view/Orion_phase_2/job/test_remote_api_triggerring/build
The JSON payload is not sent as the call's JSON body (which is what confused me), but rather in the form data payload as:
json:'{
"parameter": [
{"name":"product", "value":"123"},
{"name":"suites", "value":"high"},
{"name":"markers", "value":"Hello"}
]
}'
And here are the CURL commands for each of the above calls:
curl -X POST -H "Jenkins-Crumb:2e11fc9...0ed4883a14a" http://jenkins:8080/view/Orion_phase_2/job/test_remote_api_triggerring/build --user "raameeil:228366f31...f655eb82058ad12d" --form json='{"parameter": [{"name":"product", "value":"123"}, {"name":"suites", "value":"high"}, {"name":"markers", "value":"Hello"}]}'
curl -X POST \
'http://jenkins:8080/view/Orion_phase_2/job/test_remote_api_triggerring/buildWithParameters?product=234&suites=333&markers=555' \
-H 'authorization: Basic c2hsb21pb...ODRlNjU1ZWI4MjAyOGFkMTJk' \
-H 'cache-control: no-cache' \
-H 'jenkins-crumb: 0bed4c7...9031c735a' \
-H 'postman-token: 0fb2ef51-...-...-...-6430e9263c3b'
What to send to Python's requests
In order to send the above calls in Python you will need to pass:
headers = jenkins-crumb
auth = tuple of your (user_name, user_auth_token)
data = dictionary type { 'json' : json string of {"parameter":[....]} }
curl -v -X POST http://user:token@host:port/job/my-job/build --data-urlencode json='{"parameter": [{"name":"xx", "value":"xxx"}]}'
or use Python request:
import requests
import json
# user:token@host embeds the credentials in the URL; adjust host and port as needed
url = "http://user:token@host:port/job/my-job/build"
payload = {"parameter": [
    {"name": "xx", "value": "xxx"},
]}
# the JSON goes in a form field named 'json', not in the raw request body
data = {'json': json.dumps(payload)}
rep = requests.post(url, data=data)
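If CSRF protection is enabled, the Jenkins-Crumb header and the (user_name, user_auth_token) tuple listed above can be passed explicitly instead of being embedded in the URL. A sketch of both call options with placeholder credentials and crumb value:
import json
import requests

auth = ("raameeil", "228366f31...")        # (user_name, user_auth_token) placeholders
headers = {"Jenkins-Crumb": "2e11fc9..."}  # crumb value obtained from Jenkins

# option 1: /buildWithParameters with URL query parameters
requests.post(
    "http://jenkins:8080/view/Orion_phase_2/job/test_remote_api_triggerring/buildWithParameters",
    params={"product": "ALL", "suites": "ALL", "markers": "ALL"},
    auth=auth,
    headers=headers,
)

# option 2: /build with the JSON wrapped in the 'json' form field
payload = {"parameter": [{"name": "product", "value": "ALL"},
                         {"name": "suites", "value": "ALL"},
                         {"name": "markers", "value": "ALL"}]}
requests.post(
    "http://jenkins:8080/view/Orion_phase_2/job/test_remote_api_triggerring/build",
    data={"json": json.dumps(payload)},
    auth=auth,
    headers=headers,
)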
I have a Kafka Connect sink for which the JSON below is passed in a curl command to register the connector.
Please let me know if anyone has any idea how to get the task IDs of my connector. For example, in the configuration below tasks.max is 3, so I need to know
the names of the 3 tasks for logging, i.e. I need to know which line of my log belongs to which task.
In the example below I know I have 3 tasks - TestCheck-1, TestCheck-2 and TestCheck-3 - based on the Kafka Connect logs. I want to know how to get the task names so that I can print them in my Kafka Connect log lines.
{
"name": "TestCheck",
"config": {
"topics": "topic1",
"connector.class": "ApplicationSinkTask Class package",
"tasks.max": "3",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.storage.StringConverter",
"connector.url": "jdbc connection url",
"driver.name": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
"username": "myusername",
"password": "mypassword",
"table.name": "test_table",
"database.name": "test",
}
}
When I register it, I get the details below.
curl -X POST -H "Content-Type: application/json" --data @myjson.json http://service:8082/connectors
{"name":"TestCheck","config":{"topics":"topic1","connector.class":"ApplicationSinkTask Class package","tasks.max":"3","key.converter":"org.apache.kafka.connect.storage.StringConverter","value.converter":"org.apache.kafka.connect.storage.StringConverter","connector.url":"jdbc:sqlserver://datahubprod.database.windows.net:1433;","driver.name":"jdbc connection url","username":"myuser","password":"mypassword","table.name":"test_table","database.name":"test","name":"TestCheck"},"tasks":[{"connector":"TestCheck","task":0},{"connector":"TestCheck","task":1},{"connector":"TestCheck","task":2}],"type":null}
You can manage the connectors with the Kafka Connect REST API. There is a whole heap of commands, which you can find here.
The example given in the above link shows you can retrieve all tasks for a given connector using the command
$ curl localhost:8083/connectors/local-file-sink/tasks
[
{
"id": {
"connector": "local-file-sink",
"task": 0
},
"config": {
"task.class": "org.apache.kafka.connect.file.FileStreamSinkTask",
"topics": "connect-test",
"file": "test.sink.txt"
}
}
]
You can use a language of your choice to send the curl command and import the JSON response into a variable/dictionary for further use, such as printing to a log. Here is a very simple example using Python which assigns the whole output to a variable.
import requests
import json
connectors = 'http://localhost:8083/connectors'
p = requests.get(connectors)
data = p.json()
If you parse the data variable into a dictionary, you can then access each element, e.g. the task ID.
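For example, a minimal sketch that prints the connector name and task number of every task of the connector registered above (this assumes the Connect REST endpoint localhost:8083 used in the example and the connector name TestCheck):
import requests

# /connectors/<name>/tasks returns one entry per task, as shown above
tasks = requests.get('http://localhost:8083/connectors/TestCheck/tasks').json()

for t in tasks:
    # prints e.g. "TestCheck 0", "TestCheck 1", "TestCheck 2"
    print(t['id']['connector'], t['id']['task'])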
I hope this helps!
I have been using Druid for the past week and wanted to enable JavaScript for some postAggregations.
I think I followed the outlined steps and updated the common.runtime.properties file in ../conf/druid/_common/ to include druid.javascript.enabled=true. I then stopped the current processes and re-ran the Quickstart procedures, but it still says that JavaScript is disabled:
{
"error" : "Unknown exception",
"errorMessage" : "Instantiation of [simple type, class io.druid.query.aggregation.post.JavaScriptPostAggregator] value failed: JavaScript is disabled. (through reference chain: java.util.ArrayList[0])",
"errorClass" : "com.fasterxml.jackson.databind.JsonMappingException",
"host" : null
}
I am currently running it in the 'Quickstart' configuration - single local machine. Any pointers? Thanks!
JavaScript query for Druid aggregation: save the query below as query.body and send it with the curl request.
This is a sample query for an average value.
curl -X POST "http://localhost:8082/druid/v2/?pretty" \
  -H 'Content-Type: application/json' -d @query.body
{
  "queryType": "groupBy",
  "dataSource": "whirldata",
  "granularity": "all",
  "dimensions": [],
  "aggregations": [
    {"name": "rows", "type": "count", "fieldName": "rows"},
    {"name": "TargetDOS", "type": "doubleSum", "fieldName": "Target DOS"}
  ],
  "postAggregations": [
    {
      "type": "javascript",
      "name": "Target DOS Average",
      "fieldNames": ["TargetDOS", "rows"],
      "function": "function(TargetDOS, rows) { return Math.abs(TargetDOS) / rows; }"
    }
  ],
  "intervals": ["2006-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z"]
}
The part you are missing is likely that the quickstart reads configs from conf-quickstart rather than conf. So try editing conf-quickstart/druid/_common/common.runtime.properties.
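After editing conf-quickstart/druid/_common/common.runtime.properties and restarting the quickstart processes, the post-aggregation can be re-tested with the same query. A minimal sketch of the curl call above in Python (it assumes query.body sits in the current directory and the broker listens on port 8082):
import requests

# send the groupBy query with the JavaScript post-aggregation to the broker
with open("query.body", "rb") as f:
    resp = requests.post(
        "http://localhost:8082/druid/v2/?pretty",
        headers={"Content-Type": "application/json"},
        data=f.read(),
    )

# if druid.javascript.enabled is still not picked up, the "JavaScript is disabled"
# error from the question comes back; otherwise the aggregation results are printed
print(resp.status_code)
print(resp.text)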