How to configure memcache with Amazon ElastiCache in Drupal settings.php - memcached

I have configured memcache with an external Amazon ElastiCache URL in settings.php:
$settings['memcache']['servers'] =
['xxx-memcache.xxxx.xxx.xxx.cache.amazonaws.com:11211' => 'default'];
$settings['memcache']['bins'] = ['default' => 'default'];
$settings['memcache']['key_prefix'] = '';
$settings['cache']['default'] = 'cache.backend.memcache';
But I receive the error below:
There may be a problem with your Memcache configuration. Please review README.txt and visit the Drupal admin page for more information.
Can anyone help fix this issue and let me know where I am wrong?

Try this (note this is the Drupal 7-style memcache configuration using $conf; the question above uses the Drupal 8 $settings array):
# MemCache Configuration
$conf['cache_backends'][] = './sites/xyz/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
$conf['memcache_key_prefix'] = 'xyzlive';
or https://www.drupal.org/project/memcache
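Also worth checking: this warning usually appears when the memcache module cannot reach or use the configured server, and with ElastiCache the endpoint must be reachable from the web server's VPC/security group. A minimal connectivity sketch, assuming the php-memcached PECL extension (the key name is arbitrary):
<?php
// Hypothetical connectivity check, run from the web server.
$m = new Memcached();
$m->addServer('xxx-memcache.xxxx.xxx.xxx.cache.amazonaws.com', 11211);
$m->set('connectivity_test', 'ok');
var_dump($m->get('connectivity_test')); // "ok" if reachable, false if not
If this prints false, fix the security group / network path before touching the Drupal settings.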

What is the correct configuration of mod_ping on ejabberd 18.12.1?

I am using ejabberd server version 18.12.1 with stream management enabled. When a user disconnects from the internet, their presence remains online, so I decided to use mod_ping to kill the connection after a timeout.
I used the following config in the ejabberd.yml file:
mod_ping:
  send_pings: true
  ping_ack_timeout: 32
  timeout_action: kill
considering the default value of ping_interval: 60.
Ping does not seem to be working with this configuration. Am I missing any other configuration? Should the client enable something to make this work? Is there any ping log that I can check?
Note: on the modules page of the ejabberd web admin, the value of ping_ack_timeout for mod_ping seems to differ from the one in the ejabberd.yml file. Why is that?
[{ping_interval,60},
{ping_ack_timeout,32000},
{send_pings,true},
{timeout_action,kill}]
Note: on the modules page of the ejabberd web admin, the value of ping_ack_timeout for mod_ping seems to differ from the one in the ejabberd.yml file. Why is that?
That is expected: you set the human-configurable option in seconds, and internally the value is expressed in milliseconds (the time unit used by Erlang), so ping_ack_timeout: 32 in ejabberd.yml shows up as {ping_ack_timeout,32000} in the web admin.
Am I missing any other configuration? Should the client enable something to make this work? Is there any ping log that I can check?
That should be enough. Try other clients, just to check whether that makes any difference. I've installed ejabberd 18.12 and configured it like this:
loglevel: 5
...
mod_ping:
  send_pings: true
  ping_interval: 10
  ping_ack_timeout: 15
  timeout_action: kill
Then I start ejabberd and log in with the Tkabber client (though I think any client is fine for testing ping). Every ten seconds, the client receives this query:
<iq to='user1@localhost/tka1'
    from='user1@localhost'
    type='get'
    id='rr-1552642185584-13814872912241253802-5xOvCCobbU2TCC/RT4GaqD6M8bo=-55238004'>
  <ping xmlns='urn:xmpp:ping'/>
</iq>
And at the same time, the ejabberd log file shows several messages, starting with this one:
10:29:30.585 [debug] route:
#iq{id = <<"rr-1552642185584-13814872912241253802-5xOvCCobbU2TCC/RT4GaqD6M8bo=-55238004">>,
    type = get,lang = <<>>,
    from = #jid{user = <<"user1">>,server = <<"localhost">>,resource = <<>>,
                luser = <<"user1">>,lserver = <<"localhost">>,
                lresource = <<>>},
    to = #jid{user = <<"user1">>,server = <<"localhost">>,
              resource = <<"tka1">>,luser = <<"user1">>,
              lserver = <<"localhost">>,lresource = <<"tka1">>},
    sub_els = [#ping{}],
    meta = #{}}
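For reference, a client that implements XEP-0199 answers such a ping with an empty result IQ whose id matches the ping; mirroring the query above, the reply would look roughly like this (a sketch):
<iq to='user1@localhost'
    from='user1@localhost/tka1'
    type='result'
    id='rr-1552642185584-13814872912241253802-5xOvCCobbU2TCC/RT4GaqD6M8bo=-55238004'/>
If no such reply arrives, mod_ping waits ping_ack_timeout seconds and then applies timeout_action (kill in the configuration above).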

Splunk Kafka Add-on doesn't read Chef-managed configuration files

We are using Chef to manage our infrastructure, and I'm running into an issue where the Splunk TA (Add-on for Kafka) simply refuses to acknowledge the kafka_credentials.conf file I've dropped into the plugin's local directory. If I use the web UI, it generates an entry properly and it shows up in the add-on configuration.
[root@ip-10-14-1-42 local]# ls
app.conf  inputs.conf  kafka.conf  kafka_credentials.conf
[root@ip-10-14-1-42 local]# grep -nr "" *.conf
app.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
app.conf:2:[install]
app.conf:3:is_configured = 1
inputs.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
inputs.conf:2:[kafka_mod]
inputs.conf:3:interval = 60
inputs.conf:4:start_by_shell = false
inputs.conf:5:
inputs.conf:6:[kafka_mod://my_app]
inputs.conf:7:kafka_cluster = default
inputs.conf:8:kafka_topic = log-my_app
inputs.conf:9:kafka_topic_group = my_app
inputs.conf:10:kafka_partition_offset = earliest
inputs.conf:11:index = main
kafka.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
kafka.conf:2:[global_settings]
kafka.conf:3:log_level = INFO
kafka.conf:4:index = main
kafka.conf:5:use_kv_store = 0
kafka.conf:6:use_multiprocess_consumer = 1
kafka.conf:7:fetch_message_max_bytes = 1048576
kafka_credentials.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
kafka_credentials.conf:2:[default]
kafka_credentials.conf:3:kafka_brokers = 10.14.2.164:9092,10.14.2.194:9092
kafka_credentials.conf:4:kafka_partition_offset = earliest
kafka_credentials.conf:5:index = main
Upon restarting Splunk, the add-on is installed and the input is even created under the Inputs section, but the cluster itself is "not available", and when examining the logs I see this:
2017-08-09 01:40:25,442 INFO pid=29212 tid=MainThread file=kafka_mod.py:main:168 | Start Kafka
2017-08-09 01:40:30,508 INFO pid=29212 tid=MainThread file=kafka_config.py:_get_kafka_clusters:228 | Clusters: {}
2017-08-09 01:40:30,509 INFO pid=29212 tid=MainThread file=kafka_config.py:__init__:188 | No Kafka cluster are configured
It seems like this plugin only respects clusters created through the web UI. That is not going to work, as we want to be able to fully configure this through Chef. Short of hacking the REST API, or fudging around with the .py files in the add-on directory and forcing a dictionary in, what are my options?
Wondering if anyone has encountered this before.
If I had to guess, it is silently rejecting the files because # is not traditionally used for comments in INI files. Try ; instead.
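Applied to the Chef-managed credentials file above, that suggestion would look like this (a sketch; only the comment marker changes):
; MANAGED BY CHEF. PLEASE DO NOT MODIFY!
[default]
kafka_brokers = 10.14.2.164:9092,10.14.2.194:9092
kafka_partition_offset = earliest
index = main
Separately, to see whether Splunk itself parses the file (independent of the add-on's own reader), btool shows the merged configuration and which file each setting comes from: $SPLUNK_HOME/bin/splunk btool kafka_credentials list --debug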

Grails spring security rest plugin does not store the generated token in the database using GORM

I am using the GORM option to store the generated token in the database for my Grails 3.x application, using the grails spring security rest plugin.
The application generates the token, but it does not get stored in the database. Do we need to override the token storage method and provide our own implementation to store the token in the database?
The plugin properties configured in application.groovy are listed below:
grails.plugin.springsecurity.rest.token.validation.useBearerToken = false
grails.plugin.springsecurity.rest.login.endpointUrl = '/api/login'
grails.plugin.springsecurity.rest.token.validation.headerName = 'X-Auth-Token'
grails.plugin.springsecurity.rest.token.storage.useJwt = false
grails.plugin.springsecurity.rest.token.storage.useGorm=true
grails.plugin.springsecurity.rest.token.storage.gorm.tokenDomainClassName='com.auth.AuthenticationToken'
grails.plugin.springsecurity.rest.token.storage.gorm.tokenValuePropertyName='token'
grails.plugin.springsecurity.rest.token.storage.gorm.usernamePropertyName='username'
grails.plugin.springsecurity.rest.login.passwordPropertyName = 'password'
grails.plugin.springsecurity.rest.login.useJsonCredentials = true
grails.plugin.springsecurity.rest.login.useRequestParamsCredentials = false
grails.plugin.springsecurity.rest.token.rendering.authoritiesPropertyName = 'permissions'
Make sure you have added the following to your build.gradle:
compile 'org.grails.plugins:spring-security-rest:2.0.0.M2'
compile 'org.grails.plugins:spring-security-rest-gorm:2.0.0.M2'
And that you have defined the following in application.groovy or application.yml:
grails.plugin.springsecurity.rest.token.storage.useGorm=true
grails.plugin.springsecurity.rest.token.storage.gorm.tokenDomainClassName = 'com.yourdomain.AuthenticationToken'
grails.plugin.springsecurity.rest.token.storage.gorm.tokenValuePropertyName = 'tokenValue'
grails.plugin.springsecurity.rest.token.storage.gorm.usernamePropertyName = 'username'
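For completeness, the domain class referenced by tokenDomainClassName needs the two properties configured above; a minimal sketch (package and class name are the placeholders from the config, the mapping block assumed):
package com.yourdomain

class AuthenticationToken {

    String tokenValue
    String username

    static mapping = {
        version false
    }
}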
There is almost no information to help you: no build configuration, no logs, no idea how the requests are made...
But from the description of your problem, my guess is that you are missing the GORM module in your classpath. It's clearly stated in the documentation.
Also be sure to read the "What's new in 2.0" chapter.
I had the same problem: the token was not stored and no error messages were seen.
After installing the GORM plugin:
compile "org.grails.plugins:spring-security-rest-gorm:2.0.0.M2"
I could log in and a token was saved into the table.

python social auth load strategy and authenticate user manually with release 0.1.26

I have used python-social-auth for social authentication for the last two months and it was great.
I needed QQ support, hence installed the newest git commit (23e4e289ec426732324af106c7c2e24efea34aeb, not part of a release).
Until now I authenticated the user using the following code:
# setup redirect uri in order to load strategy
uri = redirect_uri = "social:complete"
if uri and not uri.startswith('/'):
    uri = reverse(redirect_uri, args=(backend,))
# load the strategy
try:
    strategy = load_strategy(
        request=request, backend=backend,
        redirect_uri=uri, **kwargs
    )
    strategy = load_strategy(request=bundle.request)
except MissingBackend:
    raise ImmediateHttpResponse(HttpNotFound('Backend not found'))
# get the backend for the strategy
backend = strategy.backend
# check backend type and set token accordingly
if isinstance(backend, BaseOAuth1):
    token = {
        'oauth_token': bundle.data.get('access_token'),
        'oauth_token_secret': bundle.data.get('access_token_secret'),
    }
elif isinstance(backend, BaseOAuth2):
    token = bundle.data.get('access_token')
else:
    raise ImmediateHttpResponse(HttpBadRequest('Wrong backend type'))
# authenticate the user
user = strategy.backend.do_auth(token)
which worked fine.
In the latest release this behaviour has changed, and an exception is raised since the "load_strategy" method has changed.
I can't seem to find any documentation on how to do it with the new release.
Any help would be appreciated!
Omri.
The last changes in the repository changed the role of the strategy: instead of being the main entity that performs the authentication, it is now just a helper class to glue the framework to the backends. Try this snippet to load the strategy and the backend:
from social.apps.django_app.utils import load_strategy, load_backend
strategy = load_strategy(request)
backend = load_backend(strategy, backend, uri)
...
user = backend.do_auth(token)
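Combined with the token-type handling from the question, the whole flow would look roughly like this (a sketch; bundle, uri and the backend name are assumed to be the same variables as in the original code):
from social.apps.django_app.utils import load_strategy, load_backend
from social.backends.oauth import BaseOAuth1, BaseOAuth2

strategy = load_strategy(request)
backend = load_backend(strategy, backend_name, uri)

# build the token in the shape the backend type expects
if isinstance(backend, BaseOAuth1):
    token = {
        'oauth_token': bundle.data.get('access_token'),
        'oauth_token_secret': bundle.data.get('access_token_secret'),
    }
elif isinstance(backend, BaseOAuth2):
    token = bundle.data.get('access_token')

# authenticate against the loaded backend directly (no strategy.backend)
user = backend.do_auth(token)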

Docpad Plugin Contactify Issue: docpad is not defined

I am trying to get the Docpad Contactify Plugin to work as expected, but I am not having any luck and I was hoping to get some help here, if at all possible.
So the plugin in question is https://github.com/thaume/docpad-plugin-contactify and it doesn't install properly via npm, so I added it via /plugins/. Anyhow, when running it clean, I get a 'ReferenceError: docpad is not defined' caused by this line...
config = docpad.getConfig().plugins.contactify
so I changed it to...
config = @getConfig()
however then I receive the following error...
TypeError: Object function ContactifyPlugin() {
return ContactifyPlugin.__super__.constructor.apply(this, arguments);
} has no method 'getConfig'
Just looking for a way to send mail and this is the only Docpad plugin that does it, so I am kinda desperate to get it operational. Any input at all would be appreciated!
There appears to be an issue with contactify and the docpad version. I had it running under docpad 6.46 and everything seemed OK. When I updated to 6.66, contactify broke. There seem to be two relevant changes: the plugin context has changed so that docpad is no longer directly available in the function(BasePlugin) context, and docpad itself no longer has a getConfig method (instead you need to access the config property directly).
Moving the offending code inside the serverExtend method seems to fix the context issue, since docpad is available as a property of the plugin's this context.
ContactifyPlugin.prototype.serverExtend = function(opts) {
  docpad = this.docpad;
  config = docpad.config.plugins.contactify;
  smtp = nodemailer.createTransport('SMTP', config.transport);
  var server = opts.server;
  ...
CoffeeScript version:
serverExtend: (opts) ->
  docpad = @docpad
  config = docpad.config.plugins.contactify
  smtp = nodemailer.createTransport('SMTP', config.transport)
  {server} = opts
  ...
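For context, serverExtend lives inside the class the plugin module exports, so the fix sits roughly here (a sketch following the usual docpad plugin layout; everything outside serverExtend is assumed):
module.exports = (BasePlugin) ->
  nodemailer = require('nodemailer')

  class ContactifyPlugin extends BasePlugin
    name: 'contactify'

    serverExtend: (opts) ->
      # docpad is a property of the plugin instance, not a global
      docpad = @docpad
      config = docpad.config.plugins.contactify
      smtp = nodemailer.createTransport('SMTP', config.transport)
      {server} = opts
      # ...register the route that sends the mail here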