!htrace shows no callstack - WinDbg

When I use !htrace -diff in WinDbg to debug a handle leak, I get a lot of handles (probably the ones that are leaking) that do not show a callstack:
What could be a reason for this and what options do I have to debug this further?
Handle = 0x000273e4 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273e0 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273dc - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273d8 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273d4 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273d0 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273cc - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273c8 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273c4 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273c0 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273bc - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273b8 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273b4 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273b0 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273ac - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273a8 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Handle = 0x000273a4 - OPEN
Thread ID = 0x00001190, Process ID = 0x0000114c
--------------------------------------
Update: The handle leak seems to depend on the graphics driver or graphics card. It starts to leak when I use any form of WPF, and it only leaks on some Windows XP systems with certain graphics cards/drivers.

The handles are opened in kernel mode by the ZwOpenProcess routine (http://msdn.microsoft.com/en-us/library/windows/hardware/ff567022(v=vs.85).aspx) and are never followed by a ZwClose call, so they leak. You don't see the call stacks because !htrace only records them when the calls are made from user mode (OpenProcess / CloseHandle).
On XP SP3 it seems difficult to find the culprit. One option would be the 'Object Reference Tracing' functionality built into the OS, but this path is paved with issues (see http://www.osronline.com/showthread.cfm?link=198302 for further references). Since you found out that the issue arises only when a particular video card is present, you can try contacting the vendor or checking for a newer version of the driver.
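For completeness, a rough outline of the Object Reference Tracing route, hedged because the exact steps depend on the OS and debugger version: enable the tracing with GFlags (Kernel Flags tab, "Object Reference Tracing", optionally filtered to the leaking process), reboot, reproduce the leak, then inspect the recorded stacks from a kernel debugger. The process name below is only illustrative:
kd> !process 0 0 MyWpfApp.exe    ; locate the EPROCESS address of the leaking process
kd> !obtrace <eprocess-address>  ; dump the reference/dereference stacks the OS recorded
The stacks may reveal which kernel-mode component (for example the video driver) keeps opening the process without closing it.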

Related

Django Sending Email Using Signals

I'm testing a Django signal to send an email, but I'm getting the following error:
'list' object has no attribute 'splitlines'
@receiver(post_save, sender=Booking)
def new_booking(sender, instance, **kwargs):
    if instance.firstname:
        firstname = [instance.firstname]
        # lastname = [instance.lastname]
        email = [instance.email]
        # phone = [instance.phone]
        subject = [instance.service]
        # date = [instance.date]
        # time = [instance.time]
        # fullname = [firstname + lastname]
        # details = [service]
        send_mail(firstname, subject, email,
                  ['cmadiam#abc.com'], fail_silently=False)
Did I miss something?
Thanks again!
Got this working... if someone needs it, here's the code:
from django.core.mail import send_mail
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import Booking

@receiver(post_save, sender=Booking)
def new_booking(sender, instance, **kwargs):
    if instance.firstname:
        # plain strings instead of lists, so send_mail no longer chokes on them
        firstname = instance.firstname
        email = instance.email
        subject = instance.service
        send_mail(firstname, subject, email,
                  ['cmadiam#abc.com'], fail_silently=False)
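For reference, the original version failed because send_mail expects plain strings for the subject and message; only the recipient list is a list. A minimal sketch of the expected argument order (the addresses and texts are placeholders):
from django.core.mail import send_mail

send_mail(
    subject="Booking received",            # must be a string
    message="Thanks for your booking!",    # must be a string
    from_email="noreply@example.com",      # a single address
    recipient_list=["admin@example.com"],  # only this argument is a list
    fail_silently=False,
)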

Zope/Plone code reload after deployment on production

Is there a way to reload the code without restarting Zope in production?
New features are rolled out roughly every two days and have to be uploaded to the server. The only way this currently works is by restarting the ZEO server and all instances. I can't use "plone.reload" because it only works in the development environment when debug mode is on. Below is the buildout.cfg content:
[buildout]
parts =
#    instance
    zeo
    client1
    client2
    client3
    zopepy
    zopeskel
    test
#    mysql
#    varnish-build
#    varnish
    supervisor
    pidproxy
extends =
    https://dist.plone.org/versions/zope-2-13-19-versions.cfg
find-links =
    https://dist.plone.org/release/4.2.4
    https://dist.plone.org/thirdparty
extensions =
    mr.developer
#    buildout.dumppickedversions
sources = sources
versions = versions
develop =
[versions]
plone.recipe.zeoserver = 1.3.1
plone.recipe.zope2instance = 4.2.8
five.localsitemanager = 2.0.5
Products.PluginRegistry = 1.3
Products.CMFCore = 2.2.7
Products.GenericSetup = 1.7.3
Products.ZSQLMethods = 2.13.4
zope.interface = 3.6.7
zope.app.publication = 3.12.0
#setuptools = 17.1.1
funcsigs = 0.4
openpyxl = 2.4.0
plone.reload = 2.0.2
[zeo]
recipe = plone.recipe.zeoserver
zeo-address = 127.0.0.1:9100
zeo-var = ${buildout:directory}/var
blob-storage = ${zeo:zeo-var}/blobstorage
#ggs = plone.app.blob
[client1]
recipe = plone.recipe.zope2instance
http-address = 9081
zeo-client = on
zeo-address = ${zeo:zeo-address}
shared-blob = on
blob-storage = ${zeo:zeo-var}/blobstorage
user = admin:Slick_RP#21!
products = ${buildout:directory}/matrix_git/prod/
debug-mode = off
verbose-security = off
eggs =
#    pillow
    mysql-python
    simplejson
    haversine
    openpyxl
    requests
    httpagentparser
    ordereddict
    python-memcached
#    python-crontab
#    setuptools
    Products.CMFCore
    Products.ZMySQLDA
#    Products.SQLAlchemyDA
    Products.PluggableAuthService
#    Products.ZopeProfiler
#    Products.MemoryProfiler
#    reportlab
    Products.BeakerSessionDataManager
    collective.fsexternalmethod
    plone.reload
zope-conf-additional =
    extensions ${buildout:directory}/matrix_git/Extensions
    <product-config beaker>
        session.type file
        session.data_dir ${buildout:directory}/var/sessions/data
        session.lock_dir ${buildout:directory}/var/sessions/lock
        session.key beaker.session
        session.secret secret
    </product-config>
zcml =
    collective.fsexternalmethod
    plone.reload
event-log-max-size = 5 MB
event-log-old-files = 5
access-log-max-size = 20 MB
access-log-old-files = 10
[client2]
recipe = plone.recipe.zope2instance
http-address = 9082
zeo-client = ${client1:zeo-client}
zeo-address = ${client1:zeo-address}
blob-storage = ${client1:blob-storage}
shared-blob = ${client1:shared-blob}
user = ${client1:user}
products = ${client1:products}
debug-mode = off
verbose-security = off
eggs = ${client1:eggs}
zcml = ${client1:zcml}
zope-conf-additional = ${client1:zope-conf-additional}
event-log-max-size = ${client1:event-log-max-size}
event-log-old-files = ${client1:event-log-old-files}
access-log-max-size = ${client1:access-log-max-size}
access-log-old-files = ${client1:access-log-old-files}
[client3]
recipe = plone.recipe.zope2instance
http-address = 9083
zeo-client = ${client1:zeo-client}
zeo-address = ${client1:zeo-address}
blob-storage = ${client1:blob-storage}
shared-blob = ${client1:shared-blob}
user = ${client1:user}
products = ${client1:products}
debug-mode = off
verbose-security = off
eggs = ${client1:eggs}
zcml = ${client1:zcml}
zope-conf-additional = ${client1:zope-conf-additional}
event-log-max-size = ${client1:event-log-max-size}
event-log-old-files = ${client1:event-log-old-files}
access-log-max-size = ${client1:access-log-max-size}
access-log-old-files = ${client1:access-log-old-files}
[zopepy]
recipe = zc.recipe.egg
eggs = ${client1:eggs}
interpreter = zopepy
scripts = zopepy
[test]
recipe = zc.recipe.testrunner
defaults = ['--auto-color', '--auto-progress']
eggs =
    ${client1:eggs}
[zopeskel]
recipe = zc.recipe.egg
eggs =
    ZopeSkel
    PasteScript
[mysql]
recipe = zest.recipe.mysql
# Note that these urls usually stop working after a while... thanks...
mysql-url = http://downloads.mysql.com/archives/mysql-5.0/mysql-5.0.86.tar.gz
mysql-python-url = http://pypi.python.org/packages/source/M/MySQL-python/MySQL-python-1.2.3.tar.gz
[varnish-build]
recipe = zc.recipe.cmmi
url = ${varnish:download-url}
[varnish]
recipe = plone.recipe.varnish
daemon = ${buildout:parts-directory}/varnish-build/sbin/varnishd
bind = 127.0.0.1:8000
backends = 127.0.0.1:8080
cache-size = 50M
[pidproxy]
recipe = zc.recipe.egg
eggs = supervisor
scripts = pidproxy
[supervisor]
recipe = collective.recipe.supervisor
port = 127.0.0.1:24007
serverurl = http://127.0.0.1:24007
programs =
#    10 mysql ${buildout:directory}/bin/pidproxy [${buildout:directory}/var/mysql/mysql.pid ${buildout:directory}/parts/mysql/install/bin/mysqld_safe --pid-file=${buildout:directory}/var/mysql/mysql.pid --socket=${buildout:directory}/var/mysql.socket] ${buildout:directory} true
    20 zeo ${buildout:directory}/bin/zeo [console] ${buildout:directory} true
    30 client1 ${buildout:directory}/bin/client1 [console] ${buildout:directory} true
    40 client2 ${buildout:directory}/bin/client2 [console] ${buildout:directory} true
    50 client3 ${buildout:directory}/bin/client3 [console] ${buildout:directory} true
If you are deploying that frequently, you can deploy at low-traffic times (e.g. at night).
If the website has to stay up at all times, you could run two sets of Plone instances: one set is active and serving requests, the second one is idle.
When updating, the offline servers are updated first, and once they are done a switch is flipped (in HAProxy, for example) to make them the active servers.
You could even keep all servers available at all times, but take some offline while they are updated.
Like the others, and as you point out yourself, I would never use plone.reload or similar development tools in production.
Yes, there is a way. Although I'd never do that in production, it's a great time-saver when developing: do the reload from within a browser view:
from plone.reload.code import reload_code
from Products.Five.browser import BrowserView

class View(BrowserView):
    def __call__(self):
        reload_code()
        return 'Code loaded.'
Then call the view with the name you registered it with on the site. This even works in non-debug mode while the instance is running in the background. Tested with a standalone instance (non-ZEO).
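For completeness, a sketch of how such a view might be registered; the view name and module path are placeholders, and the registration belongs in a configure.zcml that declares the browser namespace. Registering the view itself needs one restart, after which further code changes can be reloaded through it:
<browser:page
    name="reload-code"
    for="*"
    class=".reload_view.View"
    permission="cmf.ManagePortal"
    />
The view would then be reachable as http://yoursite/@@reload-code.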

Slack API: Retrieve all member emails from a slack channel

Given the name of a slack channel, is there a way to retrieve a list of emails of all the members in that channel? I tried looking in the slack api docs but couldn't find the method I need to make this happen (https://api.slack.com/methods).
Provided you have the necessary scopes, you can retrieve the emails of all members of a channel, starting from the channel name, as follows:
Call channels.list to get the list of all channels and to convert the channel name to its ID
Call channels.info of the desired channel with the channel ID to get the list of its members
Call users.list to retrieve the list of all Slack users including their profile information and email
Match the channel member list with the user list by user ID to get the correct users and emails
Note that this also works for private channels using groups.list and groups.info, but only if the user or bot related to the access token is a member of that private channel.
Update 2019
I would strongly recommend using the newer conversations.* methods instead of channels.* and groups.*, because they are more flexible and there are some cases where the older methods will not work (e.g. converted channels).
Here's a version that works with Python 2 or 3 using up-to-date APIs.
import os
import requests

SLACK_API_TOKEN = 'xoxb-TOKENID'  # Your token here
CHANNEL_NAME = 'general'          # Your channel here

channel_list = requests.get('https://slack.com/api/conversations.list?token=%s&types=%s' % (SLACK_API_TOKEN, 'public_channel,private_channel,im,mpim')).json()['channels']
for c in channel_list:
    if 'name' in c and c['name'] == CHANNEL_NAME:
        channel = c

members = requests.get('https://slack.com/api/conversations.members?token=%s&channel=%s' % (SLACK_API_TOKEN, channel['id'])).json()['members']

users_list = requests.get('https://slack.com/api/users.list?token=%s' % SLACK_API_TOKEN).json()['members']
for user in users_list:
    if "email" in user['profile'] and user['id'] in members:
        print(user['profile']['email'])
Note that you'll need to create a Slack App with an OAuth API token and the following scopes authorized for this to work for all of the various types of conversations:
channels:read
groups:read
im:read
mpim:read
users:read
users:read.email
Also, to read from private channels or chats, you'll need to add your app to the Workspace and "/invite appname" for each channel you're interested in.
Note: channels.list, channels.info and users.list are deprecated (they retire and cease functioning on November 25, 2020).
Replace them with conversations.list, conversations.members and users.info.
You can get the emails this way:
conversations.list - Get the list of Channel Id (public or private)
conversations.members - Get the list of Member Id by Channel Id
users.info - Get the Email by Member Id
Here's the python code:
import requests

SLACK_API_TOKEN = ""  # get one from https://api.slack.com/docs/oauth-test-tokens
CHANNEL_NAME = ""

# channel_list = requests.get('https://slack.com/api/channels.list?token=%s' % SLACK_API_TOKEN).json()['channels']
# channel = filter(lambda c: c['name'] == CHANNEL_NAME, channel_list)[0]
# channel_info = requests.get('https://slack.com/api/channels.info?token=%s&channel=%s' % (SLACK_API_TOKEN, channel['id'])).json()['channel']
# members = channel_info['members']

channel_list = requests.get('https://slack.com/api/groups.list?token=%s' % SLACK_API_TOKEN).json()['groups']
channel = filter(lambda c: c['name'] == CHANNEL_NAME, channel_list)[0]
channel_info = requests.get('https://slack.com/api/groups.info?token=%s&channel=%s' % (SLACK_API_TOKEN, channel['id'])).json()['group']
print channel_info
members = channel_info['members']

users_list = requests.get('https://slack.com/api/users.list?token=%s' % SLACK_API_TOKEN).json()['members']
users = filter(lambda u: u['id'] in members, users_list)
for user in users:
    first_name, last_name = '', ''
    if user['real_name']:
        first_name = user['real_name']
        if ' ' in user['real_name']:
            first_name, last_name = user['real_name'].split()
    # print "%s,%s,%s" % (first_name, last_name, user['profile']['email'])
    print "%s" % (user['profile']['email'])
I just made a small Ruby script that retrieves all members from a Slack channel and returns them in CSV format.
Script: https://github.com/olivernadj/toolbox/tree/master/slack-members
Example:
$ ./membersof.rb -t xoxp-123456789A-BCDEF01234-56789ABCDE-F012345678 -g QWERTYUIO
first_name,last_name,email
John,Doe,john.doe#example.com
Jane,Doe,jane.doe#example.com
Based on the answer by @Lam, I modified it to work with Python 3.
import requests

SLACK_API_TOKEN = ""  # get one from https://api.slack.com/docs/oauth-test-tokens
CHANNEL_NAME = ""

# channel_list = requests.get('https://slack.com/api/channels.list?token=%s' % SLACK_API_TOKEN).json()['channels']
# channel = filter(lambda c: c['name'] == CHANNEL_NAME, channel_list)[0]
# channel_info = requests.get('https://slack.com/api/channels.info?token=%s&channel=%s' % (SLACK_API_TOKEN, channel['id'])).json()['channel']
# members = channel_info['members']

channel_list = requests.get('https://slack.com/api/groups.list?token=%s' % SLACK_API_TOKEN).json()['groups']
for c in channel_list:
    if c['name'] == CHANNEL_NAME:
        channel = c

channel_info = requests.get('https://slack.com/api/groups.info?token=%s&channel=%s' % (SLACK_API_TOKEN, channel['id'])).json()['group']
print(channel_info)
members = channel_info['members']

users_list = requests.get('https://slack.com/api/users.list?token=%s' % SLACK_API_TOKEN).json()['members']
for user in users_list:
    if "email" in user['profile']:
        print(user['profile']['email'])
Ruby solution using slack-ruby-client:
Scopes:
channels:read
users.profile:read
users:read.email
users:read
require 'slack-ruby-client'

Slack.configure do |config|
  config.token = ENV['SLACK_TOKEN_IN_BASH_PROFILE']
end

client = Slack::Web::Client.new

CH = '#channel-name'
client.conversations_members(channel: CH).members.each do |user|
  puts client.users_profile_get(user: user).profile.email
end
I'm not sure if these are all outdated, but I couldn't get any of them to work. The best way I found was to use the client.conversations_members method to find all user IDs and then fetch the emails for those users.
import os
import slack

def get_channel_emails(channel_id: str) -> list:
    client = slack.WebClient(token=os.getenv("SLACK_TOKEN"))
    result = client.conversations_members(channel=channel_id)
    emails = []
    for user in result['members']:
        info = client.users_info(user=user).data
        if 'email' in info['user']['profile'].keys():
            emails.append(info['user']['profile']['email'])
    return emails
Some notable roadblocks are:
The slack package is actually named slackclient, so use pip install slackclient instead.
The channel_id is not the channel name but the ID Slack assigns to the channel. It can be found in the path of the web-browser version of Slack and is formatted like CXXXXXXXXXX.
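If you only know the channel name, a small helper can resolve it to the ID first. This is a minimal sketch assuming the same slackclient WebClient as above and a token with the channels:read / groups:read scopes; the function name is illustrative and pagination is not handled:
import os
import slack

def find_channel_id(channel_name: str) -> str:
    # walk conversations.list and match on the channel name
    client = slack.WebClient(token=os.getenv("SLACK_TOKEN"))
    response = client.conversations_list(types="public_channel,private_channel")
    for channel in response['channels']:
        if channel['name'] == channel_name:
            return channel['id']
    raise ValueError("no channel named %s" % channel_name)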
If you need the emails of all users in a Slack channel without writing any code:
Go to the channel settings; there is an option to "Copy member email address".
With the Slack API (a plain-requests sketch of the same flow is shown after the code below):
conversations.list - Get the list of Channel Ids (public or private)
conversations.members - Get the list of Member Ids by Channel Id
users.info - Get the Email by Member Id
With Python 3 and the slackclient package:
pip3 install slackclient
import slack

def get_channel_emails(channel_id: str):
    slack_api_bot_token = 'YOUR_BOT_TOKEN'
    ## Required BOT permissions ##
    # channels:read
    # groups:read
    # im:read
    # mpim:read
    # users:read
    client = slack.WebClient(token=slack_api_bot_token)
    result = client.conversations_members(channel=channel_id)
    i = 0
    for user in result['members']:
        #print(user)
        info = client.users_info(user=user).data
        i = i + 1
        #print(info)
        member_id = info['user']['id']
        team_id = info['user']['team_id']
        display_name = info['user']['name']
        real_name = info['user']['real_name']
        phone = info['user']['profile'].get('phone')
        email = info['user']['profile'].get('email')
        # fall back to 'null' for any field the profile does not provide
        if not member_id:
            member_id = 'null'
        if not team_id:
            team_id = 'null'
        if not display_name:
            display_name = 'null'
        if not real_name:
            real_name = 'null'
        if not phone:
            phone = 'null'
        if not email:
            email = 'null'
        print(f'{i},{real_name},{display_name},{team_id},{member_id},{email},{phone}')

def main():
    # channel id: https://app.slack.com/huddle/TB37ZG064/CB3CF4A7B
    # if the end of the URL string starts with "C", it is a channel
    get_channel_emails('CB3CF4A7B')

if __name__ == '__main__':
    main()
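As referenced above, the same conversations.list → conversations.members → users.info flow can also be done with plain requests, with the token passed in an Authorization header (which Slack recommends over the query-string style used in the older answers). A minimal sketch, assuming a bot token with the scopes listed earlier and ignoring pagination; the channel name is a placeholder:
import requests

TOKEN = "xoxb-your-token"
HEADERS = {"Authorization": "Bearer %s" % TOKEN}

# 1. conversations.list - resolve the channel name to its ID
channels = requests.get("https://slack.com/api/conversations.list",
                        headers=HEADERS,
                        params={"types": "public_channel,private_channel"}).json()["channels"]
channel_id = next(c["id"] for c in channels if c["name"] == "general")

# 2. conversations.members - list the member IDs of that channel
member_ids = requests.get("https://slack.com/api/conversations.members",
                          headers=HEADERS,
                          params={"channel": channel_id}).json()["members"]

# 3. users.info - fetch each member's profile and print the email if present
for member_id in member_ids:
    profile = requests.get("https://slack.com/api/users.info",
                           headers=HEADERS,
                           params={"user": member_id}).json()["user"]["profile"]
    if "email" in profile:
        print(profile["email"])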

Cannot convert GSM to Unicode

I am using a Kannel 1.5.0 gateway with SMPP on RHEL 6, and when I receive an SMS I get these errors:
2016-01-28 13:28:07 [8613] [6] WARNING: Could not convert GSM (0xd4) to Unicode.
2016-01-28 13:28:07 [8613] [6] WARNING: Could not convert GSM (0xf2) to Unicode.
.....
and the messages arrive at my application garbled; here is the captured request:
http://127.0.0.1:9091/services/smsReceive?msisdn=%2B353872849216&coding=0&smsText=%C3%85%3CH%C3%B9a%C3%91%C3%B9%25evM%C3%B9)zX%C3%ACp&DCS=-1&charset=UTF-8'
and this is my kannel configuration:
group = core
admin-port = 13001
smsbox-port = 13002
admin-password = bar
log-file = "/home/user/logs/kannellogs/SmscGateway.log"
log-level = 0
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1;172.*.*.*;192.*.*.*;10.*.*.*"
admin-allow-ip = "127.0.0.1;172.*.*.*;192.*.*.*;10.*.*.*"
admin-deny-ip = "*.*.*.*"
access-log = "/home/user/logs/kannellogs/access.log"
# SMSBOX SETUP
group = smsbox
bearerbox-host = localhost
sendsms-port = 13013
log-file="/home/user/logs/kannellogs/smsbox.log"
log-level = 0
access-log="/home/user/logs/kannellogs/sms_access.log"
reply-couldnotfetch = "Service is down, please try again later.(notfetch)"
reply-couldnotrepresent = "Service is down, please try again later.(notrepresent)"
reply-requestfailed = "Service is down, please try again later.(failed)"
reply-emptymessage = ""
mo-recode = true
# SEND-SMS USERS
group = sendsms-user
username=test
password=test
user-allow-ip = "*.*.*.*"
concatenation = true
split-chars = "#!^&*("
max-messages = 10
# SMPP PARAMETERS for SMSC account
group = smsc
smsc = smpp
smsc-id =Smsc12345
smsc-username = Voda
smsc-password = 12345678
host = 123.222.111.11
port = 1040
system-type = Vodafone403
interface-version = 34
source-addr-autodetect = false
source-addr-ton = 0
source-addr-npi = 1
dest-addr-ton = 1
dest-addr-npi = 1
reconnect-delay = false
reconnect-delay = 10
transceiver-mode = true
throughput = 10
address-range = "^12345$"
max-pending-submits = 3
group = sms-service
accepted-smsc = "Smsc12345"
keyword = default
get-url = "http://127.0.0.1:9091/services/smsReceive?msisdn=%p&coding=%c&smsText=%a&DCS=%m&charset=%C"
catch-all=true
max-messages = 0
I am new to Kannel; please help if I am doing anything wrong.
You should check the Kannel docs:
for a "normal" message, it will be "GSM" (coding=0), "binary" (coding=1) or "UTF-16BE" (coding=2)
What I see in the URL is
&coding=0
when it should be
&coding=2
Also make sure the text is URL-encoded correctly, and watch the length of Unicode messages (if you are going through aggregators, not all of them support concatenation and long messages).
Hope it helps.
Vedran
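As an illustration of the point about coding=2: when Kannel reports coding=2 the message payload is UTF-16BE, so the receiving service has to decode it as such rather than as UTF-8. A minimal sketch, assuming the text arrives percent-encoded in the smsText query parameter as in the captured request (the sample value is made up):
from urllib.parse import unquote_to_bytes

# hypothetical percent-encoded value taken from an incoming smsText parameter
raw = unquote_to_bytes("%00H%00e%00l%00l%00o")

# coding=2 means the payload is UTF-16BE, not UTF-8
print(raw.decode("utf-16-be"))  # -> Hello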

Cygnus 0.8.2 doesn't work

I have installed Cygnus 0.8.2 in a VM with CentOS-6.5-x64.
In the config file agent.conf I only change the following:
cygnusagent.sinks.hdfs-sink.oauth2_token = xxxxxx
cygnusagent.sinks.hdfs-sink.hdfs_username = myuser
I run cygnus with this command:
/usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf/ -f /usr/cygnus/conf/agent.conf -n cygnusagent -Dflume.root.logger=INFO,console
But when I send an XML message, this error occurs:
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
2015-07-23 14:46:02,133 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:163)] The event TTL has expired, it is no more re-injected in the channel (id=617320308, ttl=0)
2015-07-23 14:46:02,133 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:193)] Finishing transaction (1437651766-108-0000000000)
2015-07-23 14:46:02,637 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:128)] Event got from the channel (id=617320308, headers={timestamp=1437651946136, content-type=application/xml, transactionId=1437651766-108-0000000000, fiware-service=def_serv, fiware-servicepath=def_servpath, ttl=0, destination=sensorreading1_sensorreading}, bodyLength=891)
2015-07-23 14:46:02,643 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - com.telefonica.iot.cygnus.sinks.OrionHDFSSink.persist(OrionHDFSSink.java:356)] [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (def_serv/def_servpath/sensorreading1_sensorreading/sensorreading1_sensorreading.txt), Data ({"recvTime":"2015-07-23T11:45:46.136Z","nodeid":"1", "nodeid_md":[],"sensorid":"1", "sensorid_md":[],"systemid":"1", "systemid_md":[],"value":"-990.6", "value_md":[]})
2015-07-23 14:46:02,644 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:143)] Persistence error (The /user/cristina.albaladejo/def_serv/def_servpath/sensorreading1_sensorreading directory could not be created in HDFS. HttpFS response: 503 Service unavailable)
2015-07-23 14:46:02,644 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:163)] The event TTL has expired, it is no more re-injected in the channel (id=617320308, ttl=0)
2015-07-23 14:46:02,644 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:193)] Finishing transaction (1437651766-108-0000000000)
So the data are not stored in HDFS... How can I solve this?
The complete config is:
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink mysql-sink ckan-sink
cygnusagent.channels = hdfs-channel mysql-channel ckan-channel
cygnusagent.sources.http-source.channels = hdfs-channel mysql-channel ckan-channel
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.port = 5050
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
cygnusagent.sources.http-source.handler.notification_target = /notify
cygnusagent.sources.http-source.handler.default_service = def_serv
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
cygnusagent.sources.http-source.handler.events_ttl = 10
cygnusagent.sources.http-source.interceptors = ts gi
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# OrionHDFSSink configuration
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
cygnusagent.sinks.hdfs-sink.type = com.telefonica.iot.cygnus.sinks.OrionHDFSSink
cygnusagent.sinks.hdfs-sink.hdfs_host = x1.y1.z1.w1,x2.y2.z2.w2
cygnusagent.sinks.hdfs-sink.hdfs_port = 14000
cygnusagent.sinks.hdfs-sink.hdfs_username = cristina.albaladejo
cygnusagent.sinks.hdfs-sink.oauth2_token = MYTOKEN
cygnusagent.sinks.hdfs-sink.attr_persistence = column
cygnusagent.sinks.hdfs-sink.hive_host = x.y.z.w
cygnusagent.sinks.hdfs-sink.hive_port = 10000
cygnusagent.sinks.hdfs-sink.krb5_auth = false
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
cygnusagent.channels.hdfs-channel.type = memory
cygnusagent.channels.hdfs-channel.capacity = 1000
cygnusagent.channels.hdfs-channel.transactionCapacity = 100
It seems you have not configured the following properties (set the required values):
cygnusagent.sinks.hdfs-sink.hdfs_host = cosmos.lab.fiware.org
cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fiware.org
In addition, since you are only configuring the HDFS persistence, avoid any reference to mysql-sink and ckan-sink, i.e.:
cygnusagent.sinks = hdfs-sink
cygnusagent.channels = hdfs-channel
cygnusagent.sources.http-source.channels = hdfs-channel
Try the proposed changes and let me know if it works.