Latest shiro release broke my webapp [ shiro-all-1.5.1.jar ] - shiro

I upgraded my webapp to the latest Shiro release, 1.5.1, and suddenly it doesn't work anymore. Here is the error from the log:
GRAVE: Shiro environment initialization failed
java.lang.NoClassDefFoundError: org/apache/shiro/cache/CacheManagerAware
and here's my shiro.ini that seems to be the culprit:
[main]
jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealm.permissionsLookupEnabled = true
ds = com.mysql.cj.jdbc.MysqlDataSource
ps = org.apache.shiro.authc.credential.DefaultPasswordService
pm = org.apache.shiro.authc.credential.PasswordMatcher
jdbcRealmCredentialsMatcher = org.apache.shiro.authc.credential.Sha256CredentialsMatcher
ds.serverName = localhost
ds.serverTimezone=Europe/Berlin
ds.databaseName = ******
ds.user = *******
ds.password = ********
jdbcRealm.credentialsMatcher = $jdbcRealmCredentialsMatcher
jdbcRealm.dataSource = $ds
pm.passwordService = $ps
jdbcRealm.credentialsMatcher = $pm
shiro.loginUrl = /login.jsp
shiro.postOnlyLogout = true
securityManager.realms = $jdbcRealm
securityManager.rememberMeManager.cipherKey = kPH+bIxk5D2deZiIxcaaaA==
When I go back to shiro-all-1.4.2.jar, everything works fine again.
I have even tried adding these two lines to the [main] section of my shiro.ini, but they didn't fix the problem:
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager

I can confirm the issue; it should be fixed in the next release.
https://issues.apache.org/jira/browse/SHIRO-749
In addition, the "all" module will likely be deprecated in the future. I'd strongly advise against using it; instead, depend only on the modules you need (for example, maybe you just need shiro-web).
The temporary fix is either:
a) also add a dependency on shiro-cache, or
b) remove the usage of shiro-all and depend only on the modules you need (shiro-web, shiro-guice, etc.); a Maven sketch is shown below.
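For illustration, a Maven sketch of this temporary fix; the version shown and the choice of shiro-web are assumptions based on the question, so adjust them to your build:
<dependency>
    <groupId>org.apache.shiro</groupId>
    <artifactId>shiro-web</artifactId>
    <version>1.5.1</version>
</dependency>
<!-- work-around until the linked issue is resolved -->
<dependency>
    <groupId>org.apache.shiro</groupId>
    <artifactId>shiro-cache</artifactId>
    <version>1.5.1</version>
</dependency>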
Either way, thanks for the report; we will get this fixed soon!

Related

Apache shiro ini file for mongodb

I was able to configure shiro.ini for MariaDB. How should I configure shiro.ini for MongoDB?
My MariaDB configuration, which works fine, is below:
jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealm.permissionsLookupEnabled = false
jdbcRealm.authenticationQuery = SELECT Password FROM User WHERE Name = ?
ds = org.mariadb.jdbc.MariaDbDataSource
ds.serverName = localhost
ds.user = xxxx
ds.password = xxxx
ds.databaseName = xxxx
jdbcRealm.dataSource = $ds
securityManager.realms = $jdbcRealm
securityManager.sessionManager.globalSessionTimeout = 6000
I am unable to work out the data source and realm configuration for MongoDB.
You would either need to use a Mongo JDBC library, or create a custom realm with a Mongo client; there are also a few Shiro Mongo realms in the community. A minimal sketch of such a custom realm is shown below.
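As an illustration only, here is a rough sketch of a custom authenticating realm built on the MongoDB Java driver. The class name, connection string, database, collection, and field names are assumptions for the example, not an existing library:
package com.example;

import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.AuthenticationInfo;
import org.apache.shiro.authc.AuthenticationToken;
import org.apache.shiro.authc.SimpleAuthenticationInfo;
import org.apache.shiro.authc.UnknownAccountException;
import org.apache.shiro.realm.AuthenticatingRealm;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class MongoRealm extends AuthenticatingRealm {

    // "users" collection with documents like { name: "...", password: "<hash>" } -- assumed schema
    private final MongoCollection<Document> users = MongoClients
            .create("mongodb://localhost:27017")
            .getDatabase("myapp")
            .getCollection("users");

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token)
            throws AuthenticationException {
        String username = (String) token.getPrincipal();
        Document user = users.find(new Document("name", username)).first();
        if (user == null) {
            throw new UnknownAccountException("No account found for " + username);
        }
        // The stored (hashed) password is verified by whatever CredentialsMatcher is configured on the realm.
        return new SimpleAuthenticationInfo(username, user.getString("password"), getName());
    }
}
It could then be wired into shiro.ini in place of the JdbcRealm, for example:
mongoRealm = com.example.MongoRealm
securityManager.realms = $mongoRealm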

Zope/Plone code reload after deployment on production

Is there a way to reload the code without restarting Zope when in production?
New features are implemented almost every two days and have to be uploaded to the server. The only way this currently works is by restarting the ZEO server and all instances. I can't use plone.reload, as it only works in the development environment when debug mode is on. Below is the buildout.cfg content:
[buildout]
parts =
    # instance
    zeo
    client1
    client2
    client3
    zopepy
    zopeskel
    test
    # mysql
    # varnish-build
    # varnish
    supervisor
    pidproxy
extends =
    https://dist.plone.org/versions/zope-2-13-19-versions.cfg
find-links =
    https://dist.plone.org/release/4.2.4
    https://dist.plone.org/thirdparty
extensions =
    mr.developer
    # buildout.dumppickedversions
sources = sources
versions = versions
develop =

[versions]
plone.recipe.zeoserver = 1.3.1
plone.recipe.zope2instance = 4.2.8
five.localsitemanager = 2.0.5
Products.PluginRegistry = 1.3
Products.CMFCore = 2.2.7
Products.GenericSetup = 1.7.3
Products.ZSQLMethods = 2.13.4
zope.interface = 3.6.7
zope.app.publication = 3.12.0
#setuptools = 17.1.1
funcsigs = 0.4
openpyxl = 2.4.0
plone.reload = 2.0.2

[zeo]
recipe = plone.recipe.zeoserver
zeo-address = 127.0.0.1:9100
zeo-var = ${buildout:directory}/var
blob-storage = ${zeo:zeo-var}/blobstorage
#ggs = plone.app.blob

[client1]
recipe = plone.recipe.zope2instance
http-address = 9081
zeo-client = on
zeo-address = ${zeo:zeo-address}
shared-blob = on
blob-storage = ${zeo:zeo-var}/blobstorage
user = admin:Slick_RP#21!
products = ${buildout:directory}/matrix_git/prod/
debug-mode = off
verbose-security = off
eggs =
    # pillow
    mysql-python
    simplejson
    haversine
    openpyxl
    requests
    httpagentparser
    ordereddict
    python-memcached
    # python-crontab
    # setuptools
    Products.CMFCore
    Products.ZMySQLDA
    # Products.SQLAlchemyDA
    Products.PluggableAuthService
    # Products.ZopeProfiler
    # Products.MemoryProfiler
    # reportlab
    Products.BeakerSessionDataManager
    collective.fsexternalmethod
    plone.reload
zope-conf-additional =
    extensions ${buildout:directory}/matrix_git/Extensions
    <product-config beaker>
        session.type file
        session.data_dir ${buildout:directory}/var/sessions/data
        session.lock_dir ${buildout:directory}/var/sessions/lock
        session.key beaker.session
        session.secret secret
    </product-config>
zcml =
    collective.fsexternalmethod
    plone.reload
event-log-max-size = 5 MB
event-log-old-files = 5
access-log-max-size = 20 MB
access-log-old-files = 10

[client2]
recipe = plone.recipe.zope2instance
http-address = 9082
zeo-client = ${client1:zeo-client}
zeo-address = ${client1:zeo-address}
blob-storage = ${client1:blob-storage}
shared-blob = ${client1:shared-blob}
user = ${client1:user}
products = ${client1:products}
debug-mode = off
verbose-security = off
eggs = ${client1:eggs}
zcml = ${client1:zcml}
zope-conf-additional = ${client1:zope-conf-additional}
event-log-max-size = ${client1:event-log-max-size}
event-log-old-files = ${client1:event-log-old-files}
access-log-max-size = ${client1:access-log-max-size}
access-log-old-files = ${client1:access-log-old-files}

[client3]
recipe = plone.recipe.zope2instance
http-address = 9083
zeo-client = ${client1:zeo-client}
zeo-address = ${client1:zeo-address}
blob-storage = ${client1:blob-storage}
shared-blob = ${client1:shared-blob}
user = ${client1:user}
products = ${client1:products}
debug-mode = off
verbose-security = off
eggs = ${client1:eggs}
zcml = ${client1:zcml}
zope-conf-additional = ${client1:zope-conf-additional}
event-log-max-size = ${client1:event-log-max-size}
event-log-old-files = ${client1:event-log-old-files}
access-log-max-size = ${client1:access-log-max-size}
access-log-old-files = ${client1:access-log-old-files}

[zopepy]
recipe = zc.recipe.egg
eggs = ${client1:eggs}
interpreter = zopepy
scripts = zopepy

[test]
recipe = zc.recipe.testrunner
defaults = ['--auto-color', '--auto-progress']
eggs =
    ${client1:eggs}

[zopeskel]
recipe = zc.recipe.egg
eggs =
    ZopeSkel
    PasteScript

[mysql]
recipe = zest.recipe.mysql
# Note that these urls usually stop working after a while... thanks...
mysql-url = http://downloads.mysql.com/archives/mysql-5.0/mysql-5.0.86.tar.gz
mysql-python-url = http://pypi.python.org/packages/source/M/MySQL-python/MySQL-python-1.2.3.tar.gz

[varnish-build]
recipe = zc.recipe.cmmi
url = ${varnish:download-url}

[varnish]
recipe = plone.recipe.varnish
daemon = ${buildout:parts-directory}/varnish-build/sbin/varnishd
bind = 127.0.0.1:8000
backends = 127.0.0.1:8080
cache-size = 50M

[pidproxy]
recipe = zc.recipe.egg
eggs = supervisor
scripts = pidproxy

[supervisor]
recipe = collective.recipe.supervisor
port = 127.0.0.1:24007
serverurl = http://127.0.0.1:24007
programs =
    # 10 mysql ${buildout:directory}/bin/pidproxy [${buildout:directory}/var/mysql/mysql.pid ${buildout:directory}/parts/mysql/install/bin/mysqld_safe --pid-file=${buildout:directory}/var/mysql/mysql.pid --socket=${buildout:directory}/var/mysql.socket] ${buildout:directory} true
    20 zeo ${buildout:directory}/bin/zeo [console] ${buildout:directory} true
    30 client1 ${buildout:directory}/bin/client1 [console] ${buildout:directory} true
    40 client2 ${buildout:directory}/bin/client2 [console] ${buildout:directory} true
    50 client3 ${buildout:directory}/bin/client3 [console] ${buildout:directory} true
If you are deploying that frequently, you can deploy at low-traffic times (for example, at night).
If the website has to stay up, you could have two sets of Plone instances: one set is active and serving requests, while the second is not active.
When updating, the offline servers are updated, and once they are done, a switch is flipped (in HAProxy, for example; see the sketch below) to make them the active servers.
You could even keep all servers available at all times, and only take some offline while they are being updated.
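For illustration, a minimal HAProxy sketch of such a switch; the backend names and ports are assumptions and are not taken from the buildout above:
frontend plone_frontend
    bind *:8080
    default_backend plone_blue    # repoint this to plone_green once that set has been upgraded

backend plone_blue
    # currently active set of instances
    server client1 127.0.0.1:9081 check
    server client2 127.0.0.1:9082 check

backend plone_green
    # offline set that gets upgraded first
    server client4 127.0.0.1:9084 check
    server client5 127.0.0.1:9085 check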
As others (and you yourself) have pointed out, I would never use plone.reload or similar development tools in production.
Yes, there is a way, although I'd never do that in production; it's a great time-saver when developing. Do the reload within a browser view:
from plone.reload.code import reload_code
from Products.Five.browser import BrowserView

class View(BrowserView):

    def __call__(self):
        reload_code()
        return 'Code loaded.'
Then call the view on the site with the name you registered it under (a possible ZCML registration is sketched below). This even works in non-debug mode while the instance is running in the background. Tested with a standalone instance (non-ZEO).
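For completeness, a possible ZCML registration for such a view; the view name, module path, and permission are assumptions for the example:
<configure xmlns:browser="http://namespaces.zope.org/browser">
    <browser:page
        name="reload-code"
        for="*"
        class=".reload_view.View"
        permission="cmf.ManagePortal"
        />
</configure>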

Not able to connect to MongoDB with Auth - FIWARE Cygnus

We have been trying today to put a Cygnus container into production, and we haven't been able to connect it to MongoDB. In our case, we have installed MongoDB with the Auth flag, and we created different users in order to test that everything works.
However, we didn't find a way to connect Cygnus. It tries to connect to the sth_default database, but it requires enough privileges to create other databases.
The workaround was to start the MongoDB service without the Auth flag, which let us check that everything works when the admin user can be used without logging in. That is not how we would like to run it, because it is insecure.
Are we missing anything?
Thanks in advance!
UPDATE
I'm adding the Cygnus agent.conf file here. I'm using the Docker image (docker-ngsi: https://hub.docker.com/r/fiware/cygnus-ngsi/) in its latest version.
cygnus-ngsi.sources = http-source
# Using both, Mongo and Postgres sinks
cygnus-ngsi.sinks = mongo-sink postgresql-sink
cygnus-ngsi.channels = mongo-channel postgresql-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.channels = mongo-channel postgresql-channel
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = default
cygnus-ngsi.sources.http-source.handler.default_service_path = /
cygnus-ngsi.sources.http-source.interceptors = ts gi
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /opt/apache-flume/conf/grouping_rules.conf
cygnus-ngsi.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
cygnus-ngsi.sinks.mongo-sink.channel = mongo-channel
#cygnus-ngsi.sinks.mongo-sink.enable_encoding = false
#cygnus-ngsi.sinks.mongo-sink.enable_grouping = false
#cygnus-ngsi.sinks.mongo-sink.enable_name_mappings = false
#cygnus-ngsi.sinks.mongo-sink.enable_lowercase = false
#cygnus-ngsi.sinks.mongo-sink.data_model = dm-by-entity
#cygnus-ngsi.sinks.mongo-sink.attr_persistence = row
cygnus-ngsi.sinks.mongo-sink.mongo_hosts = MyIP:MyPort
cygnus-ngsi.sinks.mongo-sink.mongo_username = MyUsername
cygnus-ngsi.sinks.mongo-sink.mongo_password = MyPassword
#cygnus-ngsi.sinks.mongo-sink.db_prefix = sth_
#cygnus-ngsi.sinks.mongo-sink.collection_prefix = sth_
#cygnus-ngsi.sinks.mongo-sink.batch_size = 1
#cygnus-ngsi.sinks.mongo-sink.batch_timeout = 30
#cygnus-ngsi.sinks.mongo-sink.batch_ttl = 10
#cygnus-ngsi.sinks.mongo-sink.data_expiration = 0
#cygnus-ngsi.sinks.mongo-sink.collections_size = 0
#cygnus-ngsi.sinks.mongo-sink.max_documents = 0
#cygnus-ngsi.sinks.mongo-sink.ignore_white_spaces = true
Thanks
The following configuration lines are missing:
cygnus-ngsi.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
cygnus-ngsi.sinks.mongo-sink.channel = mongo-channel
That is, you have to specify the Java class implementing the MongoDB sink and the channel that connects the source to that sink.
If the configuration you are showing is the default one when Cygnus is installed through Docker, then the development team should be notified.

Can't access Interpreter settings in Zeppelin

I'm using Zeppelin in my Hortonworks Data Platform 2.5 cluster.
Since I set zeppelin.anonymous.allowed=false, I'm no longer able to open my interpreter settings: the Interpreter screen is empty.
My shiro_ini_content contains the following [users], [roles] and [urls] settings:
[users]
admin = passw0rd, administrator
[main]
shiro.loginUrl = /api/login
[roles]
administrator = *
[urls]
/api/version = anon
#/** = anon
/** = authc
/api/interpreter/** = authc, roles[administrator]
/api/configurations/** = authc, roles[administrator]
/api/credential/** = authc, roles[administrator]
I made the settings based on the following manual: https://shiro.apache.org/configuration.html#Configuration-%5Croles%5C
Why am I still unable to access the Interpreter settings?
You also need sessionManager settings in your [main] section, like below.
[main]
shiro.loginUrl = /api/login
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
securityManager.sessionManager.globalSessionTimeout = 86400000
Could you give that a try?

GitLab "Reply-To" feature using Omnibus not working?

We are currently running the latest version of GitLab (v8.0.1), installed using the Omnibus package, and are trying to enable the new "reply-to" feature, but nothing is happening.
We followed these instructions:
http://doc.gitlab.com/ce/incoming_email/README.html (specifically the Gmail instructions). We configured a new Gmail account with the lower-security ("less secure apps") option, and we also use the SMTP configuration.
When the email is replied to, it is delivered to the Gmail account, but from there nothing happens. The documentation seems a little sparse, but is GitLab supposed to pick that email up (via IMAP) and update the issue? If so, nothing is happening.
Our settings in /etc/gitlab/gitlab.rb (I had to add the incoming email section manually because it was not there) look like this:
# SMTP setup
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "aws"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "AWSUSER"
gitlab_rails['smtp_password'] = "AWSPASS"
gitlab_rails['smtp_domain'] = "git.ourdomain.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
# gitlab_rails['smtp_tls'] = false
# gitlab_rails['smtp_openssl_verify_mode'] = 'none' # Can be: 'none', 'peer', 'client_once', 'fail_if_no_peer_cert', see http://api.rubyonrails.org/classes/ActionMailer/Base.html
# gitlab_rails['smtp_ca_path'] = "/etc/ssl/certs"
# gitlab_rails['smtp_ca_file'] = "/etc/ssl/certs/ca-certificates.crt"
# Configuration for Gmail / Google Apps, assumes mailbox gitlab-incoming@gmail.com
gitlab_rails['incoming_email_enabled'] = true
gitlab_rails['incoming_email_address'] = "gitlab+%{key}@ourdomain.com"
gitlab_rails['incoming_email_email'] = "gitlab@ourdomain.com"
gitlab_rails['incoming_email_password'] = "GLPASS"
gitlab_rails['incoming_email_host'] = "imap.gmail.com"
gitlab_rails['incoming_email_port'] = 993
gitlab_rails['incoming_email_ssl'] = true
gitlab_rails['incoming_email_start_tls'] = false
gitlab_rails['incoming_email_mailbox_name'] = "inbox"
For me, installing the latest update and restarting the server seemed to solve the problem (I had restarted the server the first time as well, but it still was not working).
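If the incoming email section was added to /etc/gitlab/gitlab.rb by hand, it may also be worth re-running the Omnibus reconfigure step before restarting; a minimal sketch, assuming a standard Omnibus install:
sudo gitlab-ctl reconfigure   # re-render the service configuration from /etc/gitlab/gitlab.rb
sudo gitlab-ctl restart       # restart all GitLab services, including the incoming-mail processor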