Dear all,
I am using the Kannel 1.5.0 gateway with SMPP on RHEL 6, and when I receive an SMS I get these errors:
2016-01-28 13:28:07 [8613] [6] WARNING: Could not convert GSM (0xd4) to Unicode.
2016-01-28 13:28:07 [8613] [6] WARNING: Could not convert GSM (0xf2) to Unicode.
.....
and the messages arrive at my application incorrectly. Here is the captured request:
http://127.0.0.1:9091/services/smsReceive?msisdn=%2B353872849216&coding=0&smsText=%C3%85%3CH%C3%B9a%C3%91%C3%B9%25evM%C3%B9)zX%C3%ACp&DCS=-1&charset=UTF-8'
and this is my Kannel configuration:
group = core
admin-port = 13001
smsbox-port = 13002
admin-password = bar
log-file = "/home/user/logs/kannellogs/SmscGateway.log"
log-level = 0
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1;172.*.*.*;192.*.*.*;10.*.*.*"
admin-allow-ip = "127.0.0.1;172.*.*.*;192.*.*.*;10.*.*.*"
admin-deny-ip = "*.*.*.*"
access-log = "/home/user/logs/kannellogs/access.log"
# SMSBOX SETUP
group = smsbox
bearerbox-host = localhost
sendsms-port = 13013
log-file="/home/user/logs/kannellogs/smsbox.log"
log-level = 0
access-log="/home/user/logs/kannellogs/sms_access.log"
reply-couldnotfetch = "Service is down, please try again later.(notfetch)"
reply-couldnotrepresent = "Service is down, please try again later.(notrepresent)"
reply-requestfailed = "Service is down, please try again later.(failed)"
reply-emptymessage = ""
mo-recode = true
# SEND-SMS USERS
group = sendsms-user
username=test
password=test
user-allow-ip = "*.*.*.*"
concatenation = true
split-chars = "#!^&*("
max-messages = 10
# SMPP PARAMETERS for SMSC account
group = smsc
smsc = smpp
smsc-id = Smsc12345
smsc-username = Voda
smsc-password = 12345678
host = 123.222.111.11
port = 1040
system-type = Vodafone403
interface-version = 34
source-addr-autodetect = false
source-addr-ton = 0
source-addr-npi = 1
dest-addr-ton = 1
dest-addr-npi = 1
reconnect-delay = 10
transceiver-mode = true
throughput = 10
address-range = "^12345$"
max-pending-submits = 3
group = sms-service
accepted-smsc = "Smsc12345"
keyword = default
get-url = "http://127.0.0.1:9091/services/smsReceive?msisdn=%p&coding=%c&smsText=%a&DCS=%m&charset=%C"
catch-all=true
max-messages = 0
I am new to Kannel; please tell me if I am doing anything wrong.
You should check the Kannel docs: for a "normal" message, the coding will be "GSM" (coding=0), "binary" (coding=1) or "UTF-16BE" (coding=2).
What I see in your URL is
&coding=0
when it should be
&coding=2
Also take care that the text is URL-encoded correctly, and mind the length of Unicode messages (if you are going through aggregators, not all of them support concatenation and long messages).
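As a quick way to see exactly what your get-url delivers, you can point it at a tiny test handler instead of your application (a sketch, assuming Python 3 is available on the box; the parameter names match the get-url above):

# Minimal test endpoint mimicking /services/smsReceive from the config above.
# parse_qs percent-decodes the query string, so smsText arrives as a str.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class SmsReceive(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        print('msisdn=%s coding=%s charset=%s text=%r' % (
            qs.get('msisdn', [''])[0],
            qs.get('coding', ['?'])[0],   # 0 = GSM, 1 = binary, 2 = UTF-16BE
            qs.get('charset', ['?'])[0],
            qs.get('smsText', [''])[0]))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'OK')

HTTPServer(('127.0.0.1', 9091), SmsReceive).serve_forever()

With mo-recode and the right coding, the printed text should match what was sent.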
Hope it helps.
Vedran
Is there a way to reload code without restarting Zope in production?
New features are implemented almost every two days and have to be deployed to the server. Currently the only way that works is restarting the ZEO server and all instances. We can't use plone.reload, as it only works in the development environment with debug mode on. Below is the buildout.cfg content:
[buildout]
parts =
    # instance
    zeo
    client1
    client2
    client3
    zopepy
    zopeskel
    test
    # mysql
    # varnish-build
    # varnish
    supervisor
    pidproxy
extends =
    https://dist.plone.org/versions/zope-2-13-19-versions.cfg
find-links =
    https://dist.plone.org/release/4.2.4
    https://dist.plone.org/thirdparty
extensions =
    mr.developer
    # buildout.dumppickedversions
sources = sources
versions = versions
develop =
[versions]
plone.recipe.zeoserver = 1.3.1
plone.recipe.zope2instance = 4.2.8
five.localsitemanager = 2.0.5
Products.PluginRegistry = 1.3
Products.CMFCore = 2.2.7
Products.GenericSetup = 1.7.3
Products.ZSQLMethods = 2.13.4
zope.interface = 3.6.7
zope.app.publication = 3.12.0
#setuptools = 17.1.1
funcsigs = 0.4
openpyxl = 2.4.0
plone.reload = 2.0.2
[zeo]
recipe = plone.recipe.zeoserver
zeo-address = 127.0.0.1:9100
zeo-var = ${buildout:directory}/var
blob-storage = ${zeo:zeo-var}/blobstorage
#ggs = plone.app.blob
[client1]
recipe = plone.recipe.zope2instance
http-address = 9081
zeo-client = on
zeo-address = ${zeo:zeo-address}
shared-blob = on
blob-storage = ${zeo:zeo-var}/blobstorage
user = admin:Slick_RP#21!
products = ${buildout:directory}/matrix_git/prod/
debug-mode = off
verbose-security = off
eggs =
    # pillow
    mysql-python
    simplejson
    haversine
    openpyxl
    requests
    httpagentparser
    ordereddict
    python-memcached
    # python-crontab
    # setuptools
    Products.CMFCore
    Products.ZMySQLDA
    # Products.SQLAlchemyDA
    Products.PluggableAuthService
    # Products.ZopeProfiler
    # Products.MemoryProfiler
    # reportlab
    Products.BeakerSessionDataManager
    collective.fsexternalmethod
    plone.reload
zope-conf-additional =
    extensions ${buildout:directory}/matrix_git/Extensions
    <product-config beaker>
        session.type file
        session.data_dir ${buildout:directory}/var/sessions/data
        session.lock_dir ${buildout:directory}/var/sessions/lock
        session.key beaker.session
        session.secret secret
    </product-config>
zcml =
    collective.fsexternalmethod
    plone.reload
event-log-max-size = 5 MB
event-log-old-files = 5
access-log-max-size = 20 MB
access-log-old-files = 10
[client2]
recipe = plone.recipe.zope2instance
http-address = 9082
zeo-client = ${client1:zeo-client}
zeo-address = ${client1:zeo-address}
blob-storage = ${client1:blob-storage}
shared-blob = ${client1:shared-blob}
user = ${client1:user}
products = ${client1:products}
debug-mode = off
verbose-security = off
eggs = ${client1:eggs}
zcml = ${client1:zcml}
zope-conf-additional = ${client1:zope-conf-additional}
event-log-max-size = ${client1:event-log-max-size}
event-log-old-files = ${client1:event-log-old-files}
access-log-max-size = ${client1:access-log-max-size}
access-log-old-files = ${client1:access-log-old-files}
[client3]
recipe = plone.recipe.zope2instance
http-address = 9083
zeo-client = ${client1:zeo-client}
zeo-address = ${client1:zeo-address}
blob-storage = ${client1:blob-storage}
shared-blob = ${client1:shared-blob}
user = ${client1:user}
products = ${client1:products}
debug-mode = off
verbose-security = off
eggs = ${client1:eggs}
zcml = ${client1:zcml}
zope-conf-additional = ${client1:zope-conf-additional}
event-log-max-size = ${client1:event-log-max-size}
event-log-old-files = ${client1:event-log-old-files}
access-log-max-size = ${client1:access-log-max-size}
access-log-old-files = ${client1:access-log-old-files}
[zopepy]
recipe = zc.recipe.egg
eggs = ${client1:eggs}
interpreter = zopepy
scripts = zopepy
[test]
recipe = zc.recipe.testrunner
defaults = ['--auto-color', '--auto-progress']
eggs =
    ${client1:eggs}
[zopeskel]
recipe = zc.recipe.egg
eggs =
    ZopeSkel
    PasteScript
[mysql]
recipe = zest.recipe.mysql
# Note that these urls usually stop working after a while... thanks...
mysql-url = http://downloads.mysql.com/archives/mysql-5.0/mysql-5.0.86.tar.gz
mysql-python-url = http://pypi.python.org/packages/source/M/MySQL-python/MySQL-python-1.2.3.tar.gz
[varnish-build]
recipe = zc.recipe.cmmi
url = ${varnish:download-url}
[varnish]
recipe = plone.recipe.varnish
daemon = ${buildout:parts-directory}/varnish-build/sbin/varnishd
bind = 127.0.0.1:8000
backends = 127.0.0.1:8080
cache-size = 50M
[pidproxy]
recipe = zc.recipe.egg
eggs = supervisor
scripts = pidproxy
[supervisor]
recipe = collective.recipe.supervisor
port = 127.0.0.1:24007
serverurl = http://127.0.0.1:24007
programs =
    # 10 mysql ${buildout:directory}/bin/pidproxy [${buildout:directory}/var/mysql/mysql.pid ${buildout:directory}/parts/mysql/install/bin/mysqld_safe --pid-file=${buildout:directory}/var/mysql/mysql.pid --socket=${buildout:directory}/var/mysql.socket] ${buildout:directory} true
    20 zeo ${buildout:directory}/bin/zeo [console] ${buildout:directory} true
    30 client1 ${buildout:directory}/bin/client1 [console] ${buildout:directory} true
    40 client2 ${buildout:directory}/bin/client2 [console] ${buildout:directory} true
    50 client3 ${buildout:directory}/bin/client3 [console] ${buildout:directory} true
If you are deploying that frequently, you can deploy at low-traffic times (e.g. at night).
If the website has to be up at all times, you could run two sets of Plone instances: one set active and serving requests, the other one idle.
When updating, the offline servers are updated, and once they are done a switch is flipped (HAProxy, for example) to make them the active servers.
You could even keep all servers available at all times, and only take some offline while they are updated.
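A minimal sketch of such a switch in HAProxy, assuming the client ports from the buildout above (the backend names are illustrative); to cut over, point default_backend at the freshly updated set and reload HAProxy:

frontend www
    bind *:80
    default_backend plone_blue
backend plone_blue
    server client1 127.0.0.1:9081 check
    server client2 127.0.0.1:9082 check
backend plone_green
    server client3 127.0.0.1:9083 check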
As others, and you yourself, have pointed out, I would never use plone.reload or similar development tools in production.
Yes, there is a way, although I'd never do it in production; it's a great time-saver when developing. Do the reload from within a browser view:
from plone.reload.code import reload_code
from Products.Five.browser import BrowserView

class View(BrowserView):
    def __call__(self):
        reload_code()
        return 'Code loaded.'
Then call the view with the name you registered it under on the site. This even works in non-debug mode while the instance is running in the background. Tested with a standalone instance (non-ZEO).
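For reference, the ZCML registration would look something like this (a sketch: the view name and permission are illustrative, and the class dotted path must point at wherever you defined the view):

<browser:page
    name="reload-code"
    for="*"
    class=".views.View"
    permission="cmf.ManagePortal"
    />

After one last restart to pick up the registration, requesting /@@reload-code on the site triggers the reload.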
So I have installed Cygnus, and in the simple test configuration, which I took from https://github.com/telefonicaid/fiware-cygnus/blob/master/cygnus-ngsi/README.md, everything works fine.
But I need PostgreSQL as the backend for my application.
For this I adjusted the agent_1.conf file with all the PostgreSQL parameters found in http://fiware-cygnus.readthedocs.io/en/latest/cygnus-ngsi/installation_and_administration_guide/ngsi_agent_conf/:
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = postgresql-sink
cygnus-ngsi.channels = postgresql-channel
cygnus-ngsi.sources.http-source.channels = hdfs-channel mysql-channel ckan-channel mongo-channel sth-channel kafka-channel dynamo-channel postgresql-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = default
cygnus-ngsi.sources.http-source.handler.default_service_path = /
cygnus-ngsi.sources.http-source.interceptors = ts gi
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
cygnus-ngsi.sinks.postgresql-sink.channel = postgresql-channel
cygnus-ngsi.sinks.postgresql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink
cygnus-ngsi.sinks.postgresql-sink.postgresql_host = 127.0.0.1
cygnus-ngsi.sinks.postgresql-sink.postgresql_port = 5432
cygnus-ngsi.sinks.postgresql-sink.postgresql_database = myUser
cygnus-ngsi.sinks.postgresql-sink.postgresql_username = mydb
cygnus-ngsi.sinks.postgresql-sink.postgresql_password = xxxx
cygnus-ngsi.sinks.postgresql-sink.attr_persistence = row
cygnus-ngsi.sinks.postgresql-sink.batch_size = 100
cygnus-ngsi.sinks.postgresql-sink.batch_timeout = 30
cygnus-ngsi.sinks.postgresql-sink.batch_ttl = 10
# postgresql-channel configuration
cygnus-ngsi.channels.postgresql-channel.type = memory
cygnus-ngsi.channels.postgresql-channel.capacity = 1000
cygnus-ngsi.channels.postgresql-channel.transactionCapacity = 100
I didn't really find any information about other files I am supposed to change, and I am not really sure all the parameters are correct.
I also tried the sample configuration from http://fiware-cygnus.readthedocs.io/en/latest/cygnus-ngsi/flume_extensions_catalogue/ngsi_postgresql_sink/index.html.
Cygnus seems to start correctly, but if I try to send a notification I get "connection refused".
Before doing anything, please create the database, the user and the password in PostgreSQL.
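For example, in psql (a sketch; the names must match the postgresql_username, postgresql_database and postgresql_password values used in the agent file below):

CREATE USER cygnus WITH PASSWORD 'cygnusdb';
CREATE DATABASE cygnusdb OWNER cygnus;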
The Cygnus configuration is like this one. The /etc/cygnus/conf/cygnus_instance_1.conf file:
CYGNUS_USER=cygnus
CONFIG_FOLDER=/usr/cygnus/conf
CONFIG_FILE=/usr/cygnus/conf/agent_1.conf
AGENT_NAME=cygnus-ngsi
LOGFILE_NAME=cygnus.log
ADMIN_PORT=8081
POLLING_INTERVAL=30
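With this instance file in place, Cygnus is normally started through its service script, and the log (written under /var/log/cygnus/ by default in a package install; the path may differ in your setup) shows which agent file was loaded:

service cygnus start
tail -f /var/log/cygnus/cygnus.log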
So, the other file, /usr/cygnus/conf/agent_1.conf, is like this one (please adjust the PostgreSQL parameters):
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = postgresql-sink
cygnus-ngsi.channels = postgresql-channel
cygnus-ngsi.sources.http-source.channels = postgresql-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = default
cygnus-ngsi.sources.http-source.handler.default_service_path = /
cygnus-ngsi.sources.http-source.handler.events_ttl = 10
cygnus-ngsi.sources.http-source.interceptors = ts gi
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
#cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# =============================================
# postgresql-channel configuration
# channel type (must not be changed)
cygnus-ngsi.channels.postgresql-channel.type = memory
# capacity of the channel
cygnus-ngsi.channels.postgresql-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnus-ngsi.channels.postgresql-channel.transactionCapacity = 100
# ============================================
# NGSIPostgreSQLSink configuration
# channel name from where to read notification events
cygnus-ngsi.sinks.postgresql-sink.channel = postgresql-channel
# sink class, must not be changed
cygnus-ngsi.sinks.postgresql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink
# true applies the new encoding, false applies the old encoding.
# cygnus-ngsi.sinks.postgresql-sink.enable_encoding = false
# true if the grouping feature is enabled for this sink, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_grouping = false
# true if name mappings are enabled for this sink, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_name_mappings = false
# true if lower case is wanted to forced in all the element names, false otherwise
# cygnus-ngsi.sinks.postgresql-sink.enable_lowercase = false
# the FQDN/IP address where the PostgreSQL server runs
cygnus-ngsi.sinks.postgresql-sink.postgresql_host = 127.0.0.1
# the port where the PostgreSQL server listens for incoming connections
cygnus-ngsi.sinks.postgresql-sink.postgresql_port = 5432
# the name of the postgresql database
cygnus-ngsi.sinks.postgresql-sink.postgresql_database = cygnusdb
# a valid user in the PostgreSQL server
cygnus-ngsi.sinks.postgresql-sink.postgresql_username = cygnus
# password for the user above
cygnus-ngsi.sinks.postgresql-sink.postgresql_password = cygnusdb
# how the attributes are stored, either per row or per column (row, column)
cygnus-ngsi.sinks.postgresql-sink.attr_persistence = row
# select the data_model: dm-by-service-path or dm-by-entity
cygnus-ngsi.sinks.postgresql-sink.data_model = dm-by-entity
# number of notifications to be included within a processing batch
cygnus-ngsi.sinks.postgresql-sink.batch_size = 1
# timeout for batch accumulation
cygnus-ngsi.sinks.postgresql-sink.batch_timeout = 30
# number of retries upon persistence error
cygnus-ngsi.sinks.postgresql-sink.batch_ttl = 0
# true enables cache, false disables cache
cygnus-ngsi.sinks.postgresql-sink.backend.enable_cache = true
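Once Cygnus is up, you can check that the http-source is actually listening on port 5050 by replaying the kind of NGSIv1 notification Orion sends (adapted from the Cygnus documentation; the service and servicePath headers are examples). If this already gives "connection refused", the problem is the source/port, not the PostgreSQL sink:

curl -X POST http://localhost:5050/notify \
  -H 'Content-Type: application/json' \
  -H 'Fiware-Service: default' \
  -H 'Fiware-ServicePath: /' \
  -d '{
        "subscriptionId": "51c0ac9ed714fb3b37d7d5a8",
        "originator": "localhost",
        "contextResponses": [{
          "contextElement": {
            "type": "Room",
            "isPattern": "false",
            "id": "Room1",
            "attributes": [{"name": "temperature", "type": "float", "value": "26.5"}]
          },
          "statusCode": {"code": "200", "reasonPhrase": "OK"}
        }]
      }'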
Good morning!
I currently have my FIWARE setup saving historical records in MongoDB, using mLab as the hosting provider.
I attach my agent's configuration file. The problem is that, because of the mandatory "/" character in the service path, I cannot access the generated historical data: "/" is not an allowed character in MongoDB collection names.
agent_1.conf
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = mongo-sink
cygnus-ngsi.channels = mongo-channel
cygnus-ngsi.sources.http-source.channels = mongo-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = default
cygnus-ngsi.sources.http-source.handler.default_service_path = /sevilla
cygnus-ngsi.sources.http-source.handler.events_ttl = 2
cygnus-ngsi.sources.http-source.interceptors = ts
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
cygnus-ngsi.sinks.mongo-sink.channel = mongo-channel
cygnus-ngsi.sinks.mongo-sink.enable_encoding = false
cygnus-ngsi.sinks.mongo-sink.enable_grouping = false
cygnus-ngsi.sinks.mongo-sink.enable_name_mappings = false
cygnus-ngsi.sinks.mongo-sink.enable_lowercase = false
cygnus-ngsi.sinks.mongo-sink.data_model = dm-by-service-path
cygnus-ngsi.sinks.mongo-sink.attr_persistence = row
cygnus-ngsi.sinks.mongo-sink.mongo_hosts = ds******.mlab.com:35866
cygnus-ngsi.sinks.mongo-sink.mongo_username = my_user
cygnus-ngsi.sinks.mongo-sink.mongo_password = ********
cygnus-ngsi.sinks.mongo-sink.db_prefix = sth_
cygnus-ngsi.sinks.mongo-sink.collection_prefix = sth_
cygnus-ngsi.sinks.mongo-sink.batch_size = 1
cygnus-ngsi.sinks.mongo-sink.batch_timeout = 30
cygnus-ngsi.sinks.mongo-sink.batch_ttl = 10
cygnus-ngsi.sinks.mongo-sink.data_expiration = 0
cygnus-ngsi.sinks.mongo-sink.collections_size = 0
cygnus-ngsi.sinks.mongo-sink.max_documents = 0
cygnus-ngsi.sinks.mongo-sink.ignore_white_spaces = true
cygnus-ngsi.channels.mongo-channel.type = com.telefonica.iot.cygnus.channels.CygnusMemoryChannel
cygnus-ngsi.channels.mongo-channel.capacity = 1000
cygnus-ngsi.channels.mongo-channel.transactionCapacity = 100
Is there any way for Cygnus to remove the "/" character from the service path?
Error: http://www.subirimagenes.com/imagedata.php?url=http://s2.subirimagenes.com/imagen/9827048captura-de-pantalla.png
SOLUTION: you just have to set the encoding to true in the agent configuration (the new encoding escapes characters that are forbidden in MongoDB names, such as "/"):
cygnus-ngsi.sinks.mongo-sink.enable_encoding = true
Thank you very much!
In Ganglia, I have configured two clusters: cluster A has 2 nodes and cluster B has 13. Cluster B works well, while cluster A shows only 1 node. The other node has exactly the same gmond.conf file, which is shown below:
globals {
  daemonize = yes
  setuid = yes
  user = ganglia
  debug_level = 0
  max_udp_msg_len = 1472
  mute = no
  deaf = no
  host_dmax = 0 /* secs */
  cleanup_threshold = 300 /* secs */
  gexec = no
  send_metadata_interval = 0
}
cluster {
  #name = "unspecified"
  name = "rpt"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
host {
  location = "unspecified"
}
udp_send_channel {
  #mcast_join = 239.2.11.71
  host = qt-dw-master
  port = 8557
  ttl = 1
}
/*
udp_recv_channel {
  #mcast_join = 239.2.11.71
  port = 8557
  #bind = 239.2.11.71
  #bind = qt-dw-master
}
*/
tcp_accept_channel {
  port = 8557
}
gmetad.conf on qt-dw-master is shown below:
data_source "rpt" 60 rpt0:8557 rpt1-db:8557
I have tried using multicast, but it does not work. I also wanted to find gmond's log files, but failed. Can anyone help with this problem?
Are all the gmond daemons running in cluster A? Use the command service gmond status to confirm it.
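If gmond is running but the node still does not show up, it is worth checking connectivity from the missing node to the collector, and running gmond in the foreground with debugging (a sketch; host and port are taken from the configuration above, and the conf path may differ on your system):

service gmond status
nc -zv qt-dw-master 8557                      # can we reach the tcp_accept_channel?
gmond --debug=9 -c /etc/ganglia/gmond.conf    # stay in foreground and print what is sent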
Trac does not appear to be triggering an email event when submitting a change to a ticket with a cc'ed email address. I currently have logging enabled, and I'm not finding anything pertaining to email in the trac.log file. I'm also monitoring /var/log/maillog - nothing appears there either.
I am not using SMTP. I'm attempting to configure Trac to use sendmail.
My current [notification] section:
admit_domains =
always_notify_owner = false
always_notify_reporter = false
always_notify_updater = true
email_sender = SmtpEmailSender
ignore_domains =
mime_encoding = none
sendmail_path = /usr/sbin/sendmail
smtp_always_bcc = known_good_address@domain.tld
smtp_always_cc =
smtp_default_domain =
smtp_enabled = false
smtp_from = trac@localhost
smtp_from_name =
smtp_password =
smtp_port = 25
smtp_replyto = trac@localhost
smtp_server = localhost
smtp_subject_prefix = __default__
smtp_user =
ticket_subject_template = $prefix #$ticket.id: $summary
use_public_cc = false
use_short_addr = false
use_tls = false
Path to sendmail is correct:
[box]# which sendmail
/usr/sbin/sendmail
Should all of the smtp references be removed from the trac.ini [notification] section in order for Trac's sendmail hook to work?
It should be as easy as changing
smtp_enabled = false
to
smtp_enabled = true
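Note that smtp_enabled acts as the master switch for Trac notifications even when sendmail is the transport. Since you want Trac to invoke the sendmail binary rather than speak SMTP, the relevant options would look like this (a sketch for Trac 1.0+, where the SendmailEmailSender backend is available):

[notification]
smtp_enabled = true
email_sender = SendmailEmailSender
sendmail_path = /usr/sbin/sendmail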