Cygnus doesn't write to CartoDB - fiware-orion

I'm trying to integrate Cygnus with CartoDB, but when Cygnus receives an Orion notification it doesn't store the information in CartoDB.
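For context, the notification presumably comes from an Orion NGSIv1 subscription along these lines (a sketch only: the entity id/type and attribute names are taken from the log below, while the Orion/Cygnus hosts, duration and notify condition are placeholders I'm assuming):
curl http://<orion-host>:1026/v1/subscribeContext -s -S \
    -H 'Content-Type: application/json' -H 'Accept: application/json' -d '
{
    "entities": [ { "type": "wastectr", "isPattern": "false", "id": "waste1" } ],
    "attributes": [ "category", "status" ],
    "reference": "http://<cygnus-host>:5050/notify",
    "duration": "P1M",
    "notifyConditions": [ { "type": "ONCHANGE", "condValues": [ "status" ] } ]
}'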
Here is the log trace:
time=2016-12-19T14:37:13.657Z | lvl=DEBUG | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=intercept | msg=com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor[127] : [gi] Event put in the channel, id=1724500127
time=2016-12-19T14:37:13.658Z | lvl=DEBUG | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=debug | msg=org.mortbay.log.Slf4jLog[40] : RESPONSE /notify 200
time=2016-12-19T14:37:13.659Z | lvl=DEBUG | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=debug | msg=org.mortbay.log.Slf4jLog[40] : EOF
time=2016-12-19T14:37:15.165Z | lvl=DEBUG | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=processNewBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[509] : Batch completed, persisting it
time=2016-12-19T14:37:15.166Z | lvl=DEBUG | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=persistBatch | msg=com.telefonica.iot.cygnus.sinks.NGSICartoDBSink[333] : [cartodb-sink] Processing sub-batch regarding the default_/_waste1_wastectr destination
time=2016-12-19T14:37:15.166Z | lvl=DEBUG | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=aggregate | msg=com.telefonica.iot.cygnus.sinks.NGSICartoDBSink$CartoDBAggregator[508] : [cartodb-sink] Processing context element (id=waste1, type=wastectr)
time=2016-12-19T14:37:15.166Z | lvl=DEBUG | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=aggregate | msg=com.telefonica.iot.cygnus.sinks.NGSICartoDBSink$CartoDBAggregator[530] : [cartodb-sink] Processing context attribute (name=category, type=StructuredValue)
time=2016-12-19T14:37:15.167Z | lvl=DEBUG | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=aggregate | msg=com.telefonica.iot.cygnus.sinks.NGSICartoDBSink$CartoDBAggregator[530] : [cartodb-sink] Processing context attribute (name=status, type=Text)
time=2016-12-19T14:37:15.171Z | lvl=DEBUG | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=processNewBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[523] : [java.util.ArrayList.rangeCheck(Unknown Source), java.util.ArrayList.get(Unknown Source), com.telefonica.iot.cygnus.sinks.NGSICartoDBSink$CartoDBAggregator.getRows(NGSICartoDBSink.java:410), com.telefonica.iot.cygnus.sinks.NGSICartoDBSink.persistRawAggregation(NGSICartoDBSink.java:552), com.telefonica.iot.cygnus.sinks.NGSICartoDBSink.persistBatch(NGSICartoDBSink.java:362), com.telefonica.iot.cygnus.sinks.NGSISink.processNewBatches(NGSISink.java:510), com.telefonica.iot.cygnus.sinks.NGSISink.process(NGSISink.java:327), org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68), org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147), java.lang.Thread.run(Unknown Source)]
time=2016-12-19T14:37:15.171Z | lvl=WARN | corr=68c76dfc-c5f8-11e6-9346-fa163e00324f | trans=e2827b21-972d-4692-b25a-e1b252961491 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=processNewBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[541] : Index: 0, Size: 0
time=2016-12-19T14:37:16.090Z | lvl=DEBUG | corr= | trans= | srv= | subsrv= | comp=cygnus-ngsi | op=run | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable[126] : Checking file:/usr/cygnus/conf/agent_ngsi_1.conf for changes
The configuration in agent_ngsi_1.conf is:
The next three fields set the sources, sinks and channels used by Cygnus
cygnus-ngsi.sinks = cartodb-sink
cygnus-ngsi.channels = cartodb-channel
Source configuration
# channel name where to write the notification events
#cygnus-ngsi.sources.http-source.channels = hdfs-channel mysql-channel ckan-channel mongo-channel sth-channel kafka-channel dynamodb-channel postgresql-channel
cygnus-ngsi.sources.http-source.channels = cartodb-channel
# source class, must not be changed
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnus-ngsi.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
# URL target
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
# default service (service semantic depends on the persistence sink)
cygnus-ngsi.sources.http-source.handler.default_service = default
# default service path (service path semantic depends on the persistence sink)
cygnus-ngsi.sources.http-source.handler.default_service_path = /
# source interceptors, do not change
cygnus-ngsi.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# see the doc/design/interceptors document for more details
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
NGSICartoDBSink configuration
# sink class, must not be changed
cygnus-ngsi.sinks.cartodb-sink.type = com.telefonica.iot.cygnus.sinks.NGSICartoDBSink
# channel name from where to read notification events
cygnus-ngsi.sinks.cartodb-sink.channel = cartodb-channel
# true if the grouping feature is enabled for this sink, false otherwise
cygnus-ngsi.sinks.cartodb-sink.enable_grouping = false
# true if name mappings are enabled for this sink, false otherwise
cygnus-ngsi.sinks.cartodb-sink.enable_name_mappings = false
# true if lower case is to be forced in all the element names, false otherwise
cygnus-ngsi.sinks.cartodb-sink.enable_lowercase = false
# select the data_model: dm-by-service-path or dm-by-entity
cygnus-ngsi.sinks.cartodb-sink.data_model = dm-by-entity
# absolute path to the CartoDB file containing the mapping between FIWARE service/CartoDB usernames and CartoDB API Keys
cygnus-ngsi.sinks.cartodb-sink.keys_conf_file = /usr/cygnus/conf/cartodb_keys.conf
# if true the latitude and longitude values are exchanged, false otherwise
#cygnus-ngsi.sinks.cartodb-sink.swap_coordinates = true
# if true, a raw based storage is done, false otherwise
cygnus-ngsi.sinks.cartodb-sink.enable_raw = true
# if true, a distance based storage is done, false otherwise
cygnus-ngsi.sinks.cartodb-sink.enable_distance = false
# number of notifications to be included within a processing batch
#cygnus-ngsi.sinks.cartodb-sink.batch_size = 100
# timeout for batch accumulation
#cygnus-ngsi.sinks.cartodb-sink.batch_timeout = 30
# number of retries upon persistence error
#cygnus-ngsi.sinks.cartodb-sink.batch_ttl = 10
# maximum number of connections allowed for the HTTP-based CartoDB backend
#cygnus-ngsi.sinks.cartodb-sink.backend.max_conns = 500
# maximum number of connections per route allowed for the HTTP-based CartoDB backend
#cygnus-ngsi.sinks.cartodb-sink.backend.max_conns_per_route = 100
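If I'm reading the Cygnus documentation correctly, the file referenced by keys_conf_file maps FIWARE services/CartoDB usernames to API keys and is JSON along these lines (a sketch with placeholder values; please double-check against the NGSICartoDBSink docs):
{
    "cartodb_keys": [
        {
            "username": "myuser",
            "endpoint": "https://myuser.cartodb.com",
            "key": "1234567890abcdef"
        }
    ]
}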

Related

How to pass "network_name" in gcloud sql instances patch command

Can't add the name of the authorized network while using the gcloud sql patch command.
stage("Editing Authorized Networks of ${instance_name} CloudSQL") {
steps {
sh "gcloud sql instances patch ${instance_name} --authorized-networks ${network_name}=${ip_range}"
}
}
I tried this:
gcloud sql instances patch instanceid --authorized-networks Rajeev-Home=0.0.0.0/0
and I'm getting this:
ERROR: (gcloud.sql.instances.patch) argument --authorized-networks: Bad value [Rajeev-Home=0.0.0.0/0]: Must be specified in CIDR notation, also known as 'slash' notation (e.g. 192.168.100.0/24).
Usage: gcloud sql instances patch INSTANCE [optional flags]
optional flags may be --activation-policy | --active-directory-domain |
--assign-ip | --async | --audit-bucket-path |
--audit-retention-interval | --audit-upload-interval |
--authorized-gae-apps | --authorized-networks |
--availability-type | --no-backup | --backup-location |
--backup-start-time | --clear-authorized-networks |
--clear-database-flags | --clear-gae-apps |
--clear-password-policy | --connector-enforcement |
--cpu | --database-flags | --database-version |
--deletion-protection |
--deny-maintenance-period-end-date |
--deny-maintenance-period-start-date |
--deny-maintenance-period-time | --diff |
--enable-bin-log | --enable-database-replication |
--enable-google-private-path |
--enable-password-policy |
--enable-point-in-time-recovery | --follow-gae-app |
--gce-zone | --help |
--insights-config-query-insights-enabled |
--insights-config-query-plans-per-minute |
--insights-config-query-string-length |
--insights-config-record-application-tags |
--insights-config-record-client-address |
--maintenance-release-channel | --maintenance-version |
--maintenance-window-any | --maintenance-window-day |
--maintenance-window-hour | --memory | --network |
--password-policy-complexity |
--password-policy-disallow-username-substring |
--password-policy-min-length |
--password-policy-password-change-interval |
--password-policy-reuse-interval | --pricing-plan |
--remove-deny-maintenance-period | --replication |
--require-ssl | --retained-backups-count |
--retained-transaction-log-days | --secondary-zone |
--storage-auto-increase | --storage-size | --tier |
--zone
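Judging by the error message, the flag only accepts bare CIDR ranges (comma-separated), not NAME=CIDR pairs, so the network name has to be dropped. A minimal sketch of an invocation the CLI should accept (as far as I can tell, the name itself cannot be passed through this flag):
gcloud sql instances patch instanceid --authorized-networks=0.0.0.0/0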

Perl catalyst development server not accepting connections

Here is the stdout:
[lzhou#localhost script]$ perl dbauthtest_server.pl
You are loading Catalyst::Engine::HTTP explicitly.
This is almost certainly a bad idea, as Catalyst::Engine::HTTP has been removed in this version of Catalyst.
Please update your application's scripts with:
catalyst.pl -force -scripts MyApp
to update your scripts to not do this.
[debug] Debug messages enabled
[debug] Statistics enabled
[warn] You are running an old script!
Please update by running (this will overwrite existing files):
catalyst.pl -force -scripts DBAuthTest
or (this will not overwrite existing files):
catalyst.pl -scripts DBAuthTest
[debug] Loaded Config "/home/lzhou/下載/def-guide-to-catalyst-master/4439/catalyst-book-code/Chapter_6/DBAuthTest/dbauthtest.conf"
[debug] Loaded plugins:
.--------------------------------------------------------------------------------.
| Catalyst::Plugin::ConfigLoader 0.34 |
'--------------------------------------------------------------------------------'
[debug] Loaded PSGI Middleware:
.--------------------------------------------------------------------------------.
| Catalyst::Middleware::Stash |
| Plack::Middleware::HTTPExceptions |
| Plack::Middleware::RemoveRedundantBody 0.06 |
| Plack::Middleware::FixMissingBodyInRedirect 0.12 |
| Plack::Middleware::ContentLength |
| Plack::Middleware::MethodOverride 0.20 |
| Plack::Middleware::Head |
'--------------------------------------------------------------------------------'
[debug] Loaded Request Data Handlers:
.--------------------------------------------------------------------------------.
| application/json |
| application/x-www-form-urlencoded |
'--------------------------------------------------------------------------------'
[debug] Loaded dispatcher "Catalyst::Dispatcher"
[debug] Loaded engine "Catalyst::Engine"
[debug] Found home "/home/lzhou/下載/def-guide-to-catalyst-master/4439/catalyst-book-code/Chapter_6/DBAuthTest"
[debug] Loaded components:
.---------------------------------------------------------------------+----------.
| Class | Type |
+---------------------------------------------------------------------+----------+
| DBAuthTest::Controller::AuthUsers | instance |
| DBAuthTest::Controller::Root | instance |
| DBAuthTest::Model::AuthDB | instance |
| DBAuthTest::Model::AuthDB::Roles | class |
| DBAuthTest::Model::AuthDB::UserRoles | class |
| DBAuthTest::Model::AuthDB::Users | class |
| DBAuthTest::View::Web | instance |
'---------------------------------------------------------------------+----------'
[debug] Loaded Private actions:
.----------------------+--------------------------------------+------------------.
| Private | Class | Method |
+----------------------+--------------------------------------+------------------+
| /end | DBAuthTest::Controller::Root | end |
| /default | DBAuthTest::Controller::Root | default |
| /authusers/profile | DBAuthTest::Controller::AuthUsers | profile |
| /authusers/base | DBAuthTest::Controller::AuthUsers | base |
| /authusers/add | DBAuthTest::Controller::AuthUsers | add |
| /authusers/user | DBAuthTest::Controller::AuthUsers | user |
'----------------------+--------------------------------------+------------------'
[debug] Loaded Path actions:
.---------------------------------------+----------------------------------------.
| Path | Private |
+---------------------------------------+----------------------------------------+
| /... | /default |
'---------------------------------------+----------------------------------------'
[debug] Loaded Chained actions:
.---------------------------------------+----------------------------------------.
| Path Spec | Private |
+---------------------------------------+----------------------------------------+
| /authusers/add | /authusers/base (0) |
| | => /authusers/add (0) |
| /authusers/*/profile | /authusers/base (0) |
| | -> /authusers/user (1) |
| | => /authusers/profile (0) |
'---------------------------------------+----------------------------------------'
[info] DBAuthTest powered by Catalyst 5.90124
But when I visit http://localhost:3000/, the connection is refused. It appears that there is no server running.
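I can't say this alone explains the refused connection, but the startup output itself warns that the helper scripts are outdated; a sketch of what it suggests (assuming the DBAuthTest directory is the application root and the regenerated server script keeps the same name):
cd /path/to/DBAuthTest
catalyst.pl -scripts DBAuthTest        # regenerate the helper scripts without overwriting existing files
perl script/dbauthtest_server.pl       # start the development server from the regenerated script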

How to list Rackspace servers filtered by metadata using REST API?

I can see that it is possible to add metadata to a Rackspace virtual machine instance.
I want to get a list of running instances, filtered by a particular metadata tag value.
However, I can't see how to do so in the documentation.
Is it possible?
You should be able to do so using the openstack client... but it depends on which metatag you're interested in.
You can get a list of all servers:
openstack server list
This will output something like:
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| 97606ae9-7f18-4a3c-903a-1583d446119b | trysmallwin | ERROR | |
| cb78b8d5-2f03-4a3f-ab26-f389acbd0b76 | Win-try again | ERROR | public=2607:f298:5:101d:f816:3eff:fe9e:5cd4, 208.113.133.90, 2607:f298:5:101d:f816:3eff:fe36:da45, |
| | | | 208.113.133.93, 2607:f298:5:101d:f816:3eff:fe40:57d5, 208.113.133.95 |
| 040751d1-c4c5-47aa-8dec-1d69a468be1c | hnxhdkwskrvwvdwr | ACTIVE | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
Note the ID of the server and investigate deeper:
openstack server show 040751d1-c4c5-47aa-8dec-1d69a468be1c
+--------------------------------------+------------------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | iad-2 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-07-26T17:32:01.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
| config_drive | True |
| created | 2016-07-26T17:31:51Z |
| flavor | gp1.semisonic (50) |
| hostId | e1efd75d1e8f6a7f5bb228a35db13647281996087d39c65af8ce83d9 |
| id | 040751d1-c4c5-47aa-8dec-1d69a468be1c |
| image | Ubuntu-14.04 (03f89ff2-d66e-49f5-ae61-656a006bbbe9) |
| key_name | stef |
| name | hnxhdkwskrvwvdwr |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | d2fb6996496044158cf977c2129c8660 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2016-07-26T17:32:01Z |
| user_id | 5b2ca246f39a425f9a833460bf322603 |
+--------------------------------------+------------------------------------------------------------+
Adding -f json to these openstack commands will output the same data in JSON format, which you can more easily manipulate programmatically.
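For example, a rough client-side filter over that JSON (just a sketch, not a server-side API query; it assumes the jq tool is installed and that the metadata value you care about appears as the literal string "mytag" in the properties field):
for id in $(openstack server list -f value -c ID); do
    openstack server show "$id" -f json \
        | jq -r 'select((.properties | tostring) | contains("mytag")) | .id'
done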
HTH

nova diagnostics in devstack development

Over SSH, when I run this command:
nova diagnostics 2ad0dda0-072d-46c4-8689-3c487a452248
I get all the resources in devstack:
+---------------------------+----------------------+
| Property | Value |
+---------------------------+----------------------+
| cpu0_time | 3766640000000 |
| hdd_errors | 18446744073709551615 |
| hdd_read | 111736 |
| hdd_read_req | 73 |
| hdd_write | 0 |
| hdd_write_req | 0 |
| memory | 2097152 |
| memory-actual | 2097152 |
| memory-available | 1922544 |
| memory-major_fault | 2710 |
| memory-minor_fault | 10061504 |
| memory-rss | 509392 |
| memory-swap_in | 0 |
| memory-swap_out | 0 |
| memory-unused | 1079468 |
| tap5a148e0f-b8_rx | 959777 |
| tap5a148e0f-b8_rx_drop | 0 |
| tap5a148e0f-b8_rx_errors | 0 |
| tap5a148e0f-b8_rx_packets | 8758 |
| tap5a148e0f-b8_tx | 48872 |
| tap5a148e0f-b8_tx_drop | 0 |
| tap5a148e0f-b8_tx_errors | 0 |
| tap5a148e0f-b8_tx_packets | 615 |
| vda_errors | 18446744073709551615 |
| vda_read | 597230592 |
| vda_read_req | 31443 |
| vda_write | 164690944 |
| vda_write_req | 18422 |
+---------------------------+----------------------+
How can I get this in the devstack user interface?
Please help. Thanks in advance.
It's not available in the OpenStack Icehouse/Juno releases, though Juno can be edited to retrieve it in devstack.
I didn't use OpenStack Kilo. In Juno, if your hypervisor is libvirt, vSphere or XenAPI, then you can retrieve these statistics in the devstack UI. For this you have to do the following:
For Libvirt
In this location ceilometer/compute/virt/libvirt/inspector.py, add this:
# The imports go at the top of ceilometer/compute/virt/libvirt/inspector.py;
# the method itself goes inside the LibvirtInspector class defined there.
from oslo.utils import units
from ceilometer.compute.pollsters import util

    def inspect_memory_usage(self, instance, duration=None):
        instance_name = util.instance_name(instance)
        domain = self._lookup_by_name(instance_name)
        state = domain.info()[0]
        if state == libvirt.VIR_DOMAIN_SHUTOFF:
            LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                       'domain is in state of SHUTOFF'),
                     {'instance_name': instance_name})
            return

        try:
            memory_stats = domain.memoryStats()
            if (memory_stats and
                    memory_stats.get('available') and
                    memory_stats.get('unused')):
                memory_used = (memory_stats.get('available') -
                               memory_stats.get('unused'))
                # Stat provided from libvirt is in KB, converting it to MB.
                memory_used = memory_used / units.Ki
                return virt_inspector.MemoryUsageStats(usage=memory_used)
            else:
                LOG.warn(_('Failed to inspect memory usage of '
                           '%(instance_name)s, can not get info from libvirt'),
                         {'instance_name': instance_name})
        # memoryStats might launch an exception if the method
        # is not supported by the underlying hypervisor being
        # used by libvirt
        except libvirt.libvirtError as e:
            LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                       'can not get info from libvirt: %(error)s'),
                     {'instance_name': instance_name, 'error': e})
For more details you can check the following link:
https://review.openstack.org/#/c/90498/
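Once that inspector is feeding the memory usage pollster, the meter should also be queryable from the ceilometer CLI (a sketch; the memory.usage meter name comes from the standard Juno meters, and the resource id is the instance UUID from the question):
ceilometer statistics -m memory.usage -q resource=2ad0dda0-072d-46c4-8689-3c487a452248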

Unable to start HandlerSocket with mariadb

For some reason, I cannot get HandlerSocket to start listening when I start MariaDB (version 10.0.14). I am using CentOS 6.5.
my.cnf has the following settings:
handlersocket_port = 9998
handlersocket_port_wr = 9999
handlersocket_address = 127.0.0.1
Calling "SHOW GLOBAL VARIABLES LIKE 'handlersocket%'" from the mariaDb prompt shows:
+-------------------------------+-----------+
| Variable_name | Value |
+-------------------------------+-----------+
| handlersocket_accept_balance | 0 |
| handlersocket_address | 127.0.0.1 |
| handlersocket_backlog | 32768 |
| handlersocket_epoll | 1 |
| handlersocket_plain_secret | |
| handlersocket_plain_secret_wr | |
| handlersocket_port | 9998 |
| handlersocket_port_wr | 9999 |
| handlersocket_rcvbuf | 0 |
| handlersocket_readsize | 0 |
| handlersocket_sndbuf | 0 |
| handlersocket_threads | 16 |
| handlersocket_threads_wr | 1 |
| handlersocket_timeout | 300 |
| handlersocket_verbose | 10 |
| handlersocket_wrlock_timeout | 12 |
+-------------------------------+-----------+
I can start MariaDB successfully, but when I check to see which ports are actively listening, neither 9998 nor 9999 show up. I've checked the mysqld.log file, but no errors seem to be occurring.
Answering my own question here -
SELinux needed to be set to permissive mode to get HandlerSocket started.
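For reference, a sketch of how to do that on CentOS 6 (switch to permissive immediately, then persist the setting across reboots):
setenforce 0                                                    # takes effect immediately
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config  # keeps it permissive after reboot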