magento2 cron does not reindex

I have created a crontab with these commands on my Ubuntu server with Plesk 12.5:
MAILTO=""
SHELL="/bin/bash"
*/1 * * * * php -c -f /var/www/vhosts/system/domainname.com/etc/php.ini /var/www/vhosts/domainname.com/httpdocs/store/bin/magento cron:run > /var/www/vhosts/domainname.com/httpdocs/store/var/log/magento.cron.log&
MAILTO=""
SHELL="/bin/bash"
*/1 * * * * php -c -f /var/www/vhosts/system/domainname.com/etc/php.ini /var/www/vhosts/domainname.com/httpdocs/store/update/cron.php > /var/www/vhosts/domainname.com/httpdocs/store/var/log/update.cron.log&
MAILTO=""
SHELL="/bin/bash"
*/1 * * * * php -c -f /var/www/vhosts/system/domainname.com/etc/php.ini /var/www/vhosts/domainname.com/httpdocs/store/bin/magento setup:cron:run > /var/www/vhosts/domainname.com/httpdocs/store/var/log/setup.cron.log&
When run, it creates three files (magento.cron.log, update.cron.log, setup.cron.log), and all three contain the same text:
; ATTENTION!
;
; DO NOT MODIFY THIS FILE BECAUSE IT WAS GENERATED AUTOMATICALLY,
; SO ALL YOUR CHANGES WILL BE LOST THE NEXT TIME THE FILE IS GENERATED.
[PHP]
soap.wsdl_cache_limit = 5
cli_server.color = On
mysql.allow_persistent = On
mysqli.max_persistent = -1
mysql.connect_timeout = 60
session.use_only_cookies = 1
register_argc_argv = Off
mssql.min_error_severity = 10
open_basedir = "/var/www/vhosts/mydomainname.com/:/tmp/"
session.name = PHPSESSID
mysqlnd.collect_statistics = On
session.hash_function = 0
session.gc_probability = 0
log_errors_max_len = 1024
mssql.secure_connection = Off
pgsql.max_links = -1
variables_order = "GPCS"
ldap.max_links = -1
sybct.allow_persistent = On
max_input_time = 60
odbc.max_links = -1
session.save_handler = files
session.save_path = "/var/lib/php5"
mysqli.cache_size = 2000
pgsql.auto_reset_persistent = Off
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT
auto_prepend_file =
sybct.min_client_severity = 10
pgsql.max_persistent = -1
auto_globals_jit = On
soap.wsdl_cache_ttl = 86400
allow_url_fopen = On
zend.enable_gc = On
mysqli.allow_persistent = On
tidy.clean_output = Off
display_startup_errors = Off
user_dir =
session.cookie_lifetime = 0
mysqli.max_links = -1
default_socket_timeout = 60
session.serialize_handler = php
session.hash_bits_per_character = 5
unserialize_callback_func =
pdo_mysql.cache_size = 2000
default_mimetype = "text/html"
session.cache_expire = 180
max_execution_time = 30
mail.add_x_header = On
upload_max_filesize = 2M
ibase.max_links = -1
zlib.output_compression = Off
ignore_repeated_errors = Off
odbc.max_persistent = -1
file_uploads = On
ibase.max_persistent = -1
mysqli.reconnect = Off
mssql.allow_persistent = On
mysql.max_persistent = -1
mssql.max_links = -1
session.use_trans_sid = 0
mysql.default_socket =
always_populate_raw_post_data = -1
mysql.max_links = -1
odbc.defaultbinmode = 1
sybct.max_persistent = -1
output_buffering = 4096
ibase.timeformat = "%H:%M:%S"
doc_root =
log_errors = On
mysql.default_host =
default_charset = "UTF-8"
request_order = "GP"
display_errors = Off
mysqli.default_socket =
mysqli.default_pw =
html_errors = On
mssql.compatibility_mode = Off
ibase.allow_persistent = 1
sybct.min_server_severity = 10
mysql.allow_local_infile = On
post_max_size = 8M
asp_tags = Off
memory_limit = 512M
short_open_tag = Off
SMTP = localhost
precision = 14
session.use_strict_mode = 0
session.gc_maxlifetime = 1440
allow_url_include = Off
mysqli.default_host =
mysqli.default_user =
session.referer_check =
pgsql.log_notice = 0
mysql.default_port =
pgsql.ignore_notice = 0
mysql.trace_mode = Off
ibase.timestampformat = "%Y-%m-%d %H:%M:%S"
engine = On
odbc.allow_persistent = On
ibase.dateformat = "%Y-%m-%d"
track_errors = Off
max_file_uploads = 20
pgsql.allow_persistent = On
session.auto_start = 0
auto_append_file =
disable_classes =
pdo_mysql.default_socket =
mysql.default_password =
url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"
smtp_port = 25
sql.safe_mode = Off
session.cookie_path = /
expose_php = On
report_memleaks = On
session.gc_divisor = 1000
mssql.max_persistent = -1
serialize_precision = 17
odbc.check_persistent = On
sybct.max_links = -1
mysqlnd.collect_memory_statistics = Off
session.cookie_domain =
session.cookie_httponly =
session.cache_limiter = nocache
enable_dl = Off
mysqli.default_port = 3306
disable_functions =
odbc.defaultlrl = 4096
soap.wsdl_cache_enabled = 1
soap.wsdl_cache_dir = "/tmp"
mssql.min_message_severity = 10
session.use_cookies = 1
mysql.default_user =
mysql.cache_size = 2000
implicit_flush = Off
ignore_repeated_source = Off
bcmath.scale = 0
But when I enter the Magento admin, it keeps giving the message "One or more indexers are invalid. Make sure your Magento cron job is running."
I do not understand. What is it that is not working?
Thanks

You have gotten the flags for php in the wrong order: -c takes the path to an ini file as its argument, so with -c -f the -f is consumed as the value of -c, and php then treats /var/www/vhosts/system/domainname.com/etc/php.ini as the script to execute. Everything outside <?php tags is simply echoed, which is why all three logs contain the php.ini contents. It should be
*/1 * * * * php -c /var/www/vhosts/system/domainname.com/etc/php.ini -f /var/www/vhosts/domainname.com/httpdocs/store/bin/magento cron:run > /var/www/vhosts/domainname.com/httpdocs/store/var/log/magento.cron.log&
Also, provide the full path to php, which can be figured out with the command which php.
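For example, if which php prints /usr/bin/php (hypothetical; use whatever path your system reports), the first entry becomes:
*/1 * * * * /usr/bin/php -c /var/www/vhosts/system/domainname.com/etc/php.ini -f /var/www/vhosts/domainname.com/httpdocs/store/bin/magento cron:run >> /var/www/vhosts/domainname.com/httpdocs/store/var/log/magento.cron.log 2>&1
Appending with >> and redirecting stderr via 2>&1 are optional changes, but they keep earlier runs in the log and capture error output, which makes debugging easier.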


Condition only runs ONCE (instead of on all bars)

When checking for the condition gap (high-low) > 0.1% (which is met multiple times), the label only gets rendered ONCE instead of on the relevant bars within the 25-bar lookback.
Please provide a solution.
Code:
Historical Bars
//@version=5
indicator("PriceMomemtum",overlay = true,max_bars_back = 25)
gap = (math.abs(high - low)/low ) * 100
//var gap = (math.abs(high - low)/low ) * 100
if gap > 0.1
    var lbl = label.new(x = bar_index,y = na , text = na ,text_font_family = font.family_default ,xloc = xloc.bar_index,yloc =yloc.abovebar,style = label.style_arrowdown ,textcolor = color.white,size =size.small,textalign = text.align_left,tooltip = na)
    label.set_text(lbl,str.tostring(gap,"#.00")+"%")
    label.set_xy(lbl,bar_index,high )
Realtime Bars
//@version=5
indicator("PriceMomemtum",overlay = true,max_bars_back = 25)
if barstate.isrealtime
    gap = (math.abs(high - low)/low ) * 100
    //var gap = (math.abs(high - low)/low ) * 100
    if gap > 0.1
        var lbl = label.new(x = bar_index,y = na , text = na ,text_font_family = font.family_default ,xloc = xloc.bar_index,yloc =yloc.abovebar,style = label.style_arrowdown ,textcolor = color.white,size =size.small,textalign = text.align_left,tooltip = na)
        label.set_text(lbl,str.tostring(gap,"#.00")+"%")
        label.set_xy(lbl,bar_index,high )
        alert(str.tostring(time(syminfo.timezone)) + "(PriceMomentum)", alert.freq_once_per_bar)
Have you tried defining the "lbl" variable without "var"?
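A minimal sketch of that suggestion (keeping your other settings; this is an illustration, not tested against your chart). Without var, label.new runs again on every bar where the condition holds, instead of one label being created once and then moved:
//@version=5
indicator("PriceMomemtum", overlay = true, max_bars_back = 25)
gap = (math.abs(high - low)/low) * 100
if gap > 0.1
    // no "var": a new label is created on each qualifying bar
    label.new(bar_index, high, str.tostring(gap, "#.00") + "%", xloc = xloc.bar_index, yloc = yloc.abovebar, style = label.style_arrowdown, textcolor = color.white, size = size.small)
Note that Pine keeps only the most recent ~50 labels by default; raise max_labels_count in indicator() if you need more.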

Compare two symbolic expressions

I have two symbolic expressions a and b; each consists of polynomials with basic arithmetic and small, positive, integer powers.
simplify(a - b) doesn't reduce to 0, and my only alternative is to subs some random numbers into the variables and compare.
I would have expected something like expanding the expressions until there are no parentheses, then adding all fractions into a single fraction.
I converted the data into a function which can be called as:
x = sym('x', [1 8], 'real')';
err = func( x ) % should be simplified to zeros
x0 = rand( size(x) )
double( subs(err, x, x0) )
simplify(err)
The function
function err_Dpsi_Dpsi2 = func(in1)
%FUNC
% ERR_DPSI_DPSI2 = FUNC(IN1)
% This function was generated by the Symbolic Math Toolbox version 8.4.
% 29-Dec-2020 20:03:34
x1 = in1(1,:);
x2 = in1(2,:);
x3 = in1(3,:);
x4 = in1(4,:);
x5 = in1(5,:);
x6 = in1(6,:);
x7 = in1(7,:);
x8 = in1(8,:);
t2 = x1.*x6;
t3 = x2.*x5;
t4 = x1.*x7;
t5 = x3.*x5;
t6 = x2.*x7;
t7 = x3.*x6;
t8 = -x2;
t9 = -x3;
t10 = -x6;
t11 = -x7;
t15 = x1./2.0;
t16 = x2./2.0;
t17 = x1./4.0;
t18 = x3./2.0;
t19 = x2./4.0;
t20 = x3./4.0;
t21 = x5./2.0;
t22 = x6./2.0;
t23 = x5./4.0;
t24 = x7./2.0;
t25 = x6./4.0;
t26 = x7./4.0;
t43 = x2.*7.072e+3;
t44 = x3.*7.072e+3;
t45 = x4.*7.071e+3;
t46 = x6.*7.072e+3;
t47 = x7.*7.072e+3;
t48 = x8.*7.071e+3;
t60 = x2.*x8.*-7.071e+3;
t62 = x4.*x7.*-7.071e+3;
t69 = x1.*9.999907193999999e-1;
t70 = x5.*9.999907193999999e-1;
t71 = x1.*1.0000660704;
t72 = x5.*1.0000660704;
t74 = x2.*1.0001321408;
t75 = x3.*1.0001321408;
t76 = x6.*1.0001321408;
t77 = x7.*1.0001321408;
t78 = x1.*1.0000660704;
t79 = x2.*5.000660704e-1;
t80 = x2.*1.0001321408;
t81 = x3.*5.000660704e-1;
t82 = x3.*1.0001321408;
t83 = x5.*1.0000660704;
t84 = x6.*5.000660704e-1;
t85 = x6.*1.0001321408;
t86 = x7.*5.000660704e-1;
t87 = x7.*1.0001321408;
t102 = x1.*9.999907194000001e-1;
t103 = x5.*9.999907194000001e-1;
t104 = x4.*4.999953597e-1;
t105 = x8.*4.999953597e-1;
t108 = x2.*1.000132149530596;
t109 = x3.*1.000132149530596;
t110 = x6.*1.000132149530596;
t111 = x7.*1.000132149530596;
t112 = x2.*1.000056789186827;
t113 = x3.*1.000056789186827;
t114 = x6.*1.000056789186827;
t115 = x7.*1.000056789186827;
t124 = x4.*1.000056789186827;
t125 = x8.*1.000056789186827;
t126 = x4.*9.999814388861295e-1;
t127 = x8.*9.999814388861295e-1;
t128 = x2.*1.000132149530596;
t129 = x3.*1.000132149530596;
t130 = x6.*1.000132149530596;
t131 = x7.*1.000132149530596;
t139 = x4.*2.500307147434136e-1;
t140 = x8.*2.500307147434136e-1;
t141 = x2.*1.000056789186827;
t142 = x3.*1.000056789186827;
t144 = x4.*1.000056789186827;
t145 = x6.*1.000056789186827;
t146 = x7.*1.000056789186827;
t148 = x8.*1.000056789186827;
t157 = x2.*x8.*(-2.500307147434136e-1);
t158 = x4.*x7.*(-2.500307147434136e-1);
t159 = x4.*9.999814388861297e-1;
t160 = x8.*9.999814388861297e-1;
t12 = -t3;
t13 = -t4;
t14 = -t7;
t27 = t2./4.0;
t28 = t3./4.0;
t29 = t4./4.0;
t30 = t5./4.0;
t31 = t6./4.0;
t32 = t7./4.0;
t33 = t8+x1;
t34 = t9+x1;
t35 = t10+x5;
t36 = t11+x5;
t37 = -t16;
t38 = -t18;
t39 = -t20;
t40 = -t22;
t41 = -t24;
t42 = -t26;
t52 = t6.*7.072e+3;
t53 = t48.*x2;
t54 = t7.*7.072e+3;
t55 = t45.*x6;
t56 = t48.*x3;
t57 = t45.*x7;
t58 = -t45;
t59 = -t48;
t88 = -t74;
t89 = -t75;
t90 = -t76;
t91 = -t77;
t92 = -t80;
t93 = -t79;
t94 = -t82;
t95 = -t81;
t96 = -t85;
t97 = -t84;
t98 = -t87;
t99 = -t86;
t116 = -t108;
t117 = -t109;
t118 = -t110;
t119 = -t111;
t120 = -t112;
t121 = -t113;
t122 = -t114;
t123 = -t115;
t132 = -t128;
t133 = -t129;
t134 = -t130;
t135 = -t131;
t136 = t6.*2.500660747652978e-1;
t137 = t7.*2.500660747652978e-1;
t143 = -t139;
t147 = -t140;
t149 = t140.*x2;
t150 = t139.*x6;
t151 = t140.*x3;
t152 = t139.*x7;
t153 = -t141;
t154 = -t142;
t155 = -t145;
t156 = -t146;
t49 = -t28;
t50 = -t29;
t51 = -t32;
t61 = -t54;
t63 = t43+t58;
t64 = t44+t58;
t65 = t46+t59;
t66 = t47+t59;
t67 = t2+t5+t6+t12+t13+t14;
t138 = -t137;
t161 = t15+t38+t93+t104;
t162 = t15+t37+t95+t104;
t163 = t21+t41+t97+t105;
t164 = t21+t40+t99+t105;
t169 = t71+t89+t116+t124;
t170 = t71+t88+t117+t124;
t171 = t72+t91+t118+t125;
t172 = t72+t90+t119+t125;
t173 = t78+t92+t133+t144;
t174 = t78+t94+t132+t144;
t175 = t83+t96+t135+t148;
t176 = t83+t98+t134+t148;
t177 = t69+t120+t121+t126;
t178 = t70+t122+t123+t127;
t179 = t102+t153+t154+t159;
t180 = t103+t155+t156+t160;
t68 = 1.0./t67;
t73 = t27+t30+t31+t49+t50+t51;
t106 = t52+t55+t56+t60+t61+t62;
t165 = t161.^2;
t166 = t162.^2;
t167 = t163.^2;
t168 = t164.^2;
t182 = t136+t138+t150+t151+t157+t158;
t100 = 1.0./t73;
t107 = 1.0./t106;
t181 = t165+t166+t167+t168;
t183 = 1.0./t182;
t101 = t100.^2;
t184 = t183.^2;
err_Dpsi_Dpsi2 = [t181.*(t35.*t68.*t100-t36.*t68.*t100)+t101.*t181.*(t25+t42),-t100.*t169+t100.*t174+t169.*t183-t174.*t183-t101.*t181.*(t23+t42)+t181.*t184.*(t140-x7.*2.500660747652978e-1)+t36.*t68.*t100.*t181+t66.*t107.*t181.*t183.*1.0,-t100.*t170+t100.*t173+t170.*t183-t173.*t183-t181.*t184.*(t140-x6.*2.500660747652978e-1)+t101.*t181.*(t23-t25)-t35.*t68.*t100.*t181-t65.*t107.*t181.*t183.*1.0,t100.*t177-t100.*t179-t177.*t183+t179.*t183+t181.*(t65.*t107.*t183.*9.998585972850678e-1-t66.*t107.*t183.*9.998585972850678e-1)-t181.*t184.*(x6.*2.500307147434136e-1-x7.*2.500307147434136e-1),-t181.*(t33.*t68.*t100-t34.*t68.*t100)-t101.*t181.*(t19+t39),-t100.*t171+t100.*t176+t171.*t183-t176.*t183+t101.*t181.*(t17+t39)-t181.*t184.*(t139-x3.*2.500660747652978e-1)-t34.*t68.*t100.*t181-t64.*t107.*t181.*t183.*1.0,-t100.*t172+t100.*t175+t172.*t183-t175.*t183+t181.*t184.*(t139-x2.*2.500660747652978e-1)-t101.*t181.*(t17-t19)+t33.*t68.*t100.*t181+t63.*t107.*t181.*t183.*1.0,t100.*t178-t100.*t180-t178.*t183+t180.*t183-t181.*(t63.*t107.*t183.*9.998585972850678e-1-t64.*t107.*t183.*9.998585972850678e-1)+t181.*t184.*(x2.*2.500307147434136e-1-x3.*2.500307147434136e-1)];
I found what I was looking for:
num = numden( err ) % convert to a rational polynomial; we care only about the numerator
collect( num )      % cancel terms -- not needed, numden does some version of simplify
My example, though, wasn't good. For some reason, there are precision issues. I thought that symbolic math used exact arithmetic, but I didn't look into that. However, if I use variables instead of finite-precision coefficients, then it outputs zeros.
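To illustrate the numerator approach on a toy case (hypothetical expressions in u and v, not the generated function above):
syms u v real
a = (u + v)^2/(u*v);
b = (u^2 + 2*u*v + v^2)/(u*v);
num = numden(a - b);  % combine the difference into one fraction; only the numerator matters
expand(num)           % returns 0, so a and b are equal
With exact rational coefficients this reliably reduces to zero; with long decimal coefficients like those in the generated function, small representation errors can leave tiny nonzero terms behind.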

mysql does not connect with sphinxsearch

I am unable to connect to sphinxsearch with mysql:
mysql -h0 -P3306
ERROR 2003 (HY000): Can't connect to MySQL server on '0' (111)
How can I fix this error?
This is my config file, sphinx.conf. Do we need to start any service?
source src1
{
    type = mysql
    sql_host = localhost
    sql_user = root
    sql_pass = india#123
    sql_db = test
    sql_port = 3306
    sql_query = \
        SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, title, content \
        FROM documents
    sql_attr_uint = group_id
    sql_attr_timestamp = date_added
    sql_ranged_throttle = 0
    sql_query_info = SELECT * FROM documents WHERE id=$id
}
source src1throttled : src1
{
    sql_ranged_throttle = 100
}
index test1
{
    source = src1
    path = /var/lib/sphinxsearch/data/test1
    docinfo = extern
    dict = keywords
    mlock = 0
    morphology = none
    min_word_len = 1
    html_strip = 0
}
index test1stemmed : test1
{
    path = /var/lib/sphinxsearch/data/test1stemmed
    morphology = stem_en
}
index dist1
{
    type = distributed
    local = test1
    local = test1stemmed
    agent = localhost:9313:remote1
    agent = localhost:9314:remote2,remote3
    agent_connect_timeout = 1000
    agent_query_timeout = 3000
}
index rt
{
    type = rt
    path = /var/lib/sphinxsearch/data/rt
    rt_field = title
    rt_field = content
    rt_attr_uint = gid
}
indexer
{
    mem_limit = 128M
}
searchd
{
    listen = 9312
    listen = 9306:mysql41
    log = /var/log/sphinxsearch/searchd.log
    query_log = /var/log/sphinxsearch/query.log
    read_timeout = 5
    client_timeout = 300
    max_children = 30
    persistent_connections_limit = 30
    pid_file = /var/run/sphinxsearch/searchd.pid
    seamless_rotate = 1
    preopen_indexes = 1
    unlink_old = 1
    mva_updates_pool = 1M
    max_packet_size = 8M
    max_filters = 256
    max_filter_values = 4096
    max_batch_queries = 32
    workers = threads # for RT to work
}
common
{
}
mysql -h0 -P3306
Here you are trying to connect to port 3306. Why?
You seem to have searchd listening on port 9306:
listen = 9306:mysql41
... and yes, you will need searchd actually running. The 'service' may be called different things in different distributions.
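For example (assuming a Debian/Ubuntu-style package where the service is named sphinxsearch; adjust for your distribution):
sudo service sphinxsearch start
mysql -h 127.0.0.1 -P 9306
The second command connects to the SphinxQL listener defined by listen = 9306:mysql41, not to port 3306, which belongs to your actual MySQL server.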
Make sure the sphinx user has permission to this folder!
chown -R sphinx:sphinx /var/lib/sphinxsearch/

Problems when extracting data with Flume/Coordinator - HUE

I'm new to the Hadoop world and I'm having some trouble with my final data.
My purpose is to extract data from a Facebook page (I'm using the restfb API) with Flume; the data then goes to HDFS, where it is used by Hive to generate the final data. This happens every hour. All of this runs on HUE.
I don't know why, but sometimes I succeed in extracting data for the whole day, and on some days I can only extract data for a few hours.
This is the data from Flume:
As you can see, on 03/21 I could only extract the first 4 hours of the day, while on 03/22 I could extract the whole day.
Some more info:
My Flume config from Cloudera Manager:
FacebookAgent.sources = FacebookPageFansCity FacebookPageFansGenderAge FacebookPageFans FacebookPagePosts FacebookPageViews
FacebookAgent.channels = MemoryChannelFacebookPageFansCity MemoryChannelFacebookPageFansGenderAge MemoryChannelFacebookPageFans MemoryChannelFacebookPagePosts MemoryChannelFacebookPageViews
FacebookAgent.sinks = HDFSFacebookPageFansCity HDFSFacebookPageFansGenderAge HDFSFacebookPageFans HDFSFacebookPagePosts HDFSFacebookPageViews
# FacebookPageFansCity
FacebookAgent.sources.FacebookPageFansCity.type = br.com.tsystems.hadoop.flume.source.restfb.FacebookPageFansCitySource
FacebookAgent.sources.FacebookPageFansCity.channels = MemoryChannelFacebookPageFansCity
FacebookAgent.sources.FacebookPageFansCity.appId = null
FacebookAgent.sources.FacebookPageFansCity.appSecret = null
FacebookAgent.sources.FacebookPageFansCity.accessToken = *confidential*
FacebookAgent.sources.FacebookPageFansCity.pageId = *confidential*
FacebookAgent.sources.FacebookPageFansCity.proxyEnabled = false
FacebookAgent.sources.FacebookPageFansCity.proxyHost = null
FacebookAgent.sources.FacebookPageFansCity.proxyPort = -1
FacebookAgent.sources.FacebookPageFansCity.refreshInterval = 3600
FacebookAgent.sinks.HDFSFacebookPageFansCity.channel = MemoryChannelFacebookPageFansCity
FacebookAgent.sinks.HDFSFacebookPageFansCity.type = hdfs
FacebookAgent.sinks.HDFSFacebookPageFansCity.hdfs.path = hdfs://hdoop01:8020/user/flume/pocfacebook/pagefanscity/%Y%m%d%H
FacebookAgent.sinks.HDFSFacebookPageFansCity.hdfs.fileType = DataStream
FacebookAgent.sinks.HDFSFacebookPageFansCity.hdfs.writeFormat = Text
FacebookAgent.sinks.HDFSFacebookPageFansCity.hdfs.batchSize = 1000
FacebookAgent.sinks.HDFSFacebookPageFansCity.hdfs.rollSize = 0
FacebookAgent.sinks.HDFSFacebookPageFansCity.hdfs.rollCount = 10000
FacebookAgent.channels.MemoryChannelFacebookPageFansCity.type = memory
FacebookAgent.channels.MemoryChannelFacebookPageFansCity.capacity = 10000
FacebookAgent.channels.MemoryChannelFacebookPageFansCity.transactionCapacity = 1000
# FacebookPageFansGenderAge
FacebookAgent.sources.FacebookPageFansGenderAge.type = br.com.tsystems.hadoop.flume.source.restfb.FacebookPageFansGenderAgeSource
FacebookAgent.sources.FacebookPageFansGenderAge.channels = MemoryChannelFacebookPageFansGenderAge
FacebookAgent.sources.FacebookPageFansGenderAge.appId = null
FacebookAgent.sources.FacebookPageFansGenderAge.appSecret = null
FacebookAgent.sources.FacebookPageFansGenderAge.accessToken = *confidential*
FacebookAgent.sources.FacebookPageFansGenderAge.pageId = *confidential*
FacebookAgent.sources.FacebookPageFansGenderAge.proxyEnabled = false
FacebookAgent.sources.FacebookPageFansGenderAge.proxyHost = null
FacebookAgent.sources.FacebookPageFansGenderAge.proxyPort = -1
FacebookAgent.sources.FacebookPageFansGenderAge.refreshInterval = 3600
FacebookAgent.sinks.HDFSFacebookPageFansGenderAge.channel = MemoryChannelFacebookPageFansGenderAge
FacebookAgent.sinks.HDFSFacebookPageFansGenderAge.type = hdfs
FacebookAgent.sinks.HDFSFacebookPageFansGenderAge.hdfs.path = hdfs://hdoop01:8020/user/flume/pocfacebook/pagefansgenderage/%Y%m%d%H
FacebookAgent.sinks.HDFSFacebookPageFansGenderAge.hdfs.fileType = DataStream
FacebookAgent.sinks.HDFSFacebookPageFansGenderAge.hdfs.writeFormat = Text
FacebookAgent.sinks.HDFSFacebookPageFansGenderAge.hdfs.batchSize = 1000
FacebookAgent.sinks.HDFSFacebookPageFansGenderAge.hdfs.rollSize = 0
FacebookAgent.sinks.HDFSFacebookPageFansGenderAge.hdfs.rollCount = 10000
FacebookAgent.channels.MemoryChannelFacebookPageFansGenderAge.type = memory
FacebookAgent.channels.MemoryChannelFacebookPageFansGenderAge.capacity = 10000
FacebookAgent.channels.MemoryChannelFacebookPageFansGenderAge.transactionCapacity = 1000
# FacebookPageFans
FacebookAgent.sources.FacebookPageFans.type = br.com.tsystems.hadoop.flume.source.restfb.FacebookPageFansSource
FacebookAgent.sources.FacebookPageFans.channels = MemoryChannelFacebookPageFans
FacebookAgent.sources.FacebookPageFans.appId = null
FacebookAgent.sources.FacebookPageFans.appSecret = null
FacebookAgent.sources.FacebookPageFans.accessToken = *confidential*
FacebookAgent.sources.FacebookPageFans.pageId = *confidential*
FacebookAgent.sources.FacebookPageFans.proxyEnabled = false
FacebookAgent.sources.FacebookPageFans.proxyHost = null
FacebookAgent.sources.FacebookPageFans.proxyPort = -1
FacebookAgent.sources.FacebookPageFans.refreshInterval = 3600
FacebookAgent.sinks.HDFSFacebookPageFans.channel = MemoryChannelFacebookPageFans
FacebookAgent.sinks.HDFSFacebookPageFans.type = hdfs
FacebookAgent.sinks.HDFSFacebookPageFans.hdfs.path = hdfs://hdoop01:8020/user/flume/pocfacebook/pagefans/%Y%m%d%H
FacebookAgent.sinks.HDFSFacebookPageFans.hdfs.fileType = DataStream
FacebookAgent.sinks.HDFSFacebookPageFans.hdfs.writeFormat = Text
FacebookAgent.sinks.HDFSFacebookPageFans.hdfs.batchSize = 1000
FacebookAgent.sinks.HDFSFacebookPageFans.hdfs.rollSize = 0
FacebookAgent.sinks.HDFSFacebookPageFans.hdfs.rollCount = 10000
FacebookAgent.channels.MemoryChannelFacebookPageFans.type = memory
FacebookAgent.channels.MemoryChannelFacebookPageFans.capacity = 10000
FacebookAgent.channels.MemoryChannelFacebookPageFans.transactionCapacity = 1000
# FacebookPagePosts
FacebookAgent.sources.FacebookPagePosts.type = br.com.tsystems.hadoop.flume.source.restfb.FacebookPagePostsSource
FacebookAgent.sources.FacebookPagePosts.channels = MemoryChannelFacebookPagePosts
FacebookAgent.sources.FacebookPagePosts.appId = null
FacebookAgent.sources.FacebookPagePosts.appSecret = null
FacebookAgent.sources.FacebookPagePosts.accessToken = *confidential*
FacebookAgent.sources.FacebookPagePosts.pageId = *confidential*
FacebookAgent.sources.FacebookPagePosts.proxyEnabled = false
FacebookAgent.sources.FacebookPagePosts.proxyHost = null
FacebookAgent.sources.FacebookPagePosts.proxyPort = -1
FacebookAgent.sources.FacebookPagePosts.refreshInterval = 3600
FacebookAgent.sinks.HDFSFacebookPagePosts.channel = MemoryChannelFacebookPagePosts
FacebookAgent.sinks.HDFSFacebookPagePosts.type = hdfs
FacebookAgent.sinks.HDFSFacebookPagePosts.hdfs.path = hdfs://hdoop01:8020/user/flume/pocfacebook/pageposts/%Y%m%d%H
FacebookAgent.sinks.HDFSFacebookPagePosts.hdfs.fileType = DataStream
FacebookAgent.sinks.HDFSFacebookPagePosts.hdfs.writeFormat = Text
FacebookAgent.sinks.HDFSFacebookPagePosts.hdfs.batchSize = 1000
FacebookAgent.sinks.HDFSFacebookPagePosts.hdfs.rollSize = 0
FacebookAgent.sinks.HDFSFacebookPagePosts.hdfs.rollCount = 10000
FacebookAgent.channels.MemoryChannelFacebookPagePosts.type = memory
FacebookAgent.channels.MemoryChannelFacebookPagePosts.capacity = 10000
FacebookAgent.channels.MemoryChannelFacebookPagePosts.transactionCapacity = 5000
# FacebookPageViews
FacebookAgent.sources.FacebookPageViews.type = br.com.tsystems.hadoop.flume.source.restfb.FacebookPageViewsSource
FacebookAgent.sources.FacebookPageViews.channels = MemoryChannelFacebookPageViews
FacebookAgent.sources.FacebookPageViews.appId = null
FacebookAgent.sources.FacebookPageViews.appSecret = null
FacebookAgent.sources.FacebookPageViews.accessToken = *confidential*
FacebookAgent.sources.FacebookPageViews.pageId = *confidential*
FacebookAgent.sources.FacebookPageViews.proxyEnabled = false
FacebookAgent.sources.FacebookPageViews.proxyHost = null
FacebookAgent.sources.FacebookPageViews.proxyPort = -1
FacebookAgent.sources.FacebookPageViews.refreshInterval = 3600
FacebookAgent.sinks.HDFSFacebookPageViews.channel = MemoryChannelFacebookPageViews
FacebookAgent.sinks.HDFSFacebookPageViews.type = hdfs
FacebookAgent.sinks.HDFSFacebookPageViews.hdfs.path = hdfs://hdoop01:8020/user/flume/pocfacebook/pageviews/%Y%m%d%H
FacebookAgent.sinks.HDFSFacebookPageViews.hdfs.fileType = DataStream
FacebookAgent.sinks.HDFSFacebookPageViews.hdfs.writeFormat = Text
FacebookAgent.sinks.HDFSFacebookPageViews.hdfs.batchSize = 1000
FacebookAgent.sinks.HDFSFacebookPageViews.hdfs.rollSize = 0
FacebookAgent.sinks.HDFSFacebookPageViews.hdfs.rollCount = 10000
FacebookAgent.channels.MemoryChannelFacebookPageViews.type = memory
FacebookAgent.channels.MemoryChannelFacebookPageViews.capacity = 10000
FacebookAgent.channels.MemoryChannelFacebookPageViews.transactionCapacity = 1000
Can anybody help me?
UPDATE
My coordinator from Oozie

circusctl incr <name> [<nbprocess>] does not increment by nbprocess

Here is my circus.ini
[circus]
check_delay = 5
endpoint = tcp://127.0.0.1:5555
pubsub_endpoint = tcp://127.0.0.1:5556
stats_endpoint = tcp://127.0.0.1:5557
httpd = False
debug = False
[watcher:sample1]
cmd = /worker/sample1.php
warmup_delay = 0
numprocesses = 10
[watcher:sample2]
cmd = /worker/sample2.php
warmup_delay = 0
numprocesses = 10
[plugin:flapping]
use = circus.plugins.flapping.Flapping
retry_in = 3
max_retry = 2
I am trying to increase the number of processes by 2 (nbprocess) for sample1. I tried
circusctl incr sample1 2
But circus always increases it by 1, not by 2 (nbprocess). Any ideas?
Fixed by the author. Here is the reported bug.
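Until that fix is released, a workaround consistent with the behavior described above is to call incr once per additional process:
circusctl incr sample1
circusctl incr sample1
Each call bumps numprocesses by one, so two calls give the intended +2.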