Save data in Redis (docker / docker-compose)

I am using docker-compose for my application, which runs several services: web, celery, redis, etc.
I need to store the settings selected by the user, e.g.:
- app theme (can be light or dark)
  - component: switch / radio button
- app user defaults (e.g. country)
  - component: dropdowns
I am saving these settings in the Redis database, so that when the user visits again all custom preferences are loaded for that user. These are individual components (no form is used), so when the user clicks on any component the respective set_function() is called. For example:
import redis

def my_baz(value):
    """Used to store the app theme."""
    k = "key"
    v = value
    r = redis.StrictRedis.from_url(some_url)
    r.set(k, v)
    r.bgsave()  # ask Redis to snapshot the dataset to disk in the background

def my_bar(value):
    """Used to store the app country."""
    k = "key"
    v = value
    r = redis.StrictRedis.from_url(some_url)
    r.set(k, v)
    r.bgsave()
During docker-compose up I observe the error below, although in the end all services do come up.
The error occurs because of r.bgsave():
web | some messages (no errors so far)
redis | 1:M 31 Jan 2023 20:46:43.231 * Background saving started by pid 13
web | status = initialize_base_settings() // this function sets some key-value pair using r.set(k,v) and saves it in redis using r.bgsave()
web | ^^^^^^^^^^^^^^^^^
redis | 13:C 31 Jan 2023 20:46:43.284 * DB saved on disk
redis | 13:C 31 Jan 2023 20:46:43.284 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
redis | 1:M 31 Jan 2023 20:46:43.329 * Background saving terminated with success
web | raise response
web | redis.exceptions.ResponseError: Background save already in progress
celery | some messages
redis | 1:M 31 Jan 2023 20:46:44.301 * Background saving started by pid 14
redis | 14:C 31 Jan 2023 20:46:44.307 * DB saved on disk
redis | 14:C 31 Jan 2023 20:46:44.308 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
celery | raise response
celery | redis.exceptions.ResponseError: Background save already in progress
...
// a few more message lines; the services restart
redis | Finally UP
web | Finally UP
celery | Finally UP
If I remove r.bgsave() no error occurs during docker-compose up, but then I would not be persisting the user settings in Redis.
So, where should I call r.bgsave() in the app, or is there some other setting I can use to persist these settings in Redis?
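For what it's worth, one workaround I am considering is to simply tolerate the case where a background save is already running (a minimal sketch, not necessarily the right fix; some_url is the same placeholder as above):

import redis

def safe_bgsave(r):
    """Trigger a background snapshot, ignoring an already-running save."""
    try:
        r.bgsave()
    except redis.exceptions.ResponseError as e:
        # Redis answers "Background save already in progress" when a BGSAVE
        # fork is still running; that save will persist our write anyway.
        if "already in progress" not in str(e):
            raise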


Fail2Ban not working on Ubuntu 16.04 (Date issues)

I have a problem with Fail2Ban:
2018-02-23 18:23:48,727 fail2ban.datedetector [4859]: DEBUG Matched time template (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
2018-02-23 18:23:48,727 fail2ban.datedetector [4859]: DEBUG Got time 1519352628.000000 for "'Feb 23 10:23:48'" using template (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
2018-02-23 18:23:48,727 fail2ban.filter [4859]: DEBUG Processing line with time:1519352628.0 and ip:158.140.140.217
2018-02-23 18:23:48,727 fail2ban.filter [4859]: DEBUG Ignore line since time 1519352628.0 < 1519381428.727771 - 600
It says "Ignore line" because the time skew is greater than the inspection period. However, this is not the case.
If indeed 1519352628.0 is derived from Feb 23 10:23:48, then the other timestamp, 1519381428.727771, must be wrong.
I have run tests with 'invalid user' hitting this repeatedly, but Fail2ban always ignores the line.
I am positive I am getting Filter Matches within 1 second.
This is Ubuntu 16.04 and Fail2ban 0.9.3
Thanks for any help you might have!
Looks like there is a time zone issue on your machine that might cause the confusion. Try to set the correct time zone and restart both rsyslogd and fail2ban.
Regarding your debug log:
1519352628.0 = Feb 23 02:23:48
-> timestamp parsed from line in log file with time Feb 23 10:23:48 - 08:00 time zone offset!
1519381428.727771 = Feb 23 10:23:48
-> timestamp of current time when fail2ban processed the log.
Coincidentally, this is the same time as the time in the log file. That's what makes it so confusing in this case.
1519381428.727771 - 600 = Feb 23 10:13:48
-> limit for how long to look backwards in time in the log file since you've set findtime = 10m in jail.conf.
Fail2ban 'correctly' ignores the log entry that appears to be older than 10 minutes, because of the -08:00 time zone offset.
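You can verify the arithmetic yourself (a quick check in Python, using the two timestamps from your debug log):

from datetime import datetime, timezone

# timestamp fail2ban parsed from the log line vs. the current time it compared against
parsed = datetime.fromtimestamp(1519352628.0, tz=timezone.utc)
now = datetime.fromtimestamp(1519381428.727771, tz=timezone.utc)

print(parsed)        # 2018-02-23 02:23:48+00:00 -> "Feb 23 10:23:48" minus the -08:00 offset
print(now)           # 2018-02-23 10:23:48.727771+00:00
print(now - parsed)  # 8:00:00.727771 -> exactly the 8 hour time zone skew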
btw:
If you need IPv6 support for banning, consider upgrading fail2ban to v0.10.x.
And there is also a brand new fail2ban version, v0.11 (not yet marked stable, but running without issues for over a month on my machines), which has a wonderful new auto-increment bantime feature.

pgbouncer free_servers - how to increase them

The current settings of a pgbouncer server are shown below. What I don't understand is the 'free_servers' value reported by the show lists command when connecting to pgbouncer. Is it a (soft or hard) limit on the number of connections to the PostgreSQL databases used with this instance of pgbouncer?
Configuration:
max_client_conn = 2048
default_pool_size = 1024
min_pool_size = 10
reserve_pool_size = 500
reserve_pool_timeout = 1
server_idle_timeout = 600
listen_backlog = 1024
show lists gives:
pgbouncer=# show lists ;
list | items
---------------+--------
databases | 6
pools | 3
free_clients | 185
used_clients | 15
free_servers | 70
used_servers | 30
It seems that there is a limit at 30 + 70 = 100 servers, but I couldn't find it, even after reviewing the configuration values with show config, and the documentation doesn't say which setting to change to increase free_servers.
pgbouncer version: 1.7.2
EDIT :
I've just discovered that, for a pool of 6 web servers configured to hit the same PG database, 3 of them can make and maintain 200 backend (server) connections, while 3 of them can only reach 100 (as described in the first part). Yet the configuration in the pgbouncer configuration file is exactly the same, the servers are cloned VMs, and the pgbouncer version is also the same.
So far, I still haven't found any documentation explaining where this limitation comes from...
This data is just some internal information for PgBouncer.
Server information is stored in an array-list data structure which is pre-allocated up to a certain size, in this case 100 slots. used_servers = 30, free_servers = 70 means there are 30 slots currently in use and 70 slots free. PgBouncer automatically increases the size of the list when it's full, hence there is no configuration setting for this.
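If you want to watch those numbers yourself, you can poll the admin console programmatically (a minimal sketch; it assumes pgbouncer listens on port 6432 and that your user is listed in admin_users or stats_users):

import psycopg2

# pgbouncer exposes its admin console as a virtual database named "pgbouncer"
conn = psycopg2.connect(host="127.0.0.1", port=6432,
                        dbname="pgbouncer", user="pgbouncer")
conn.autocommit = True  # the admin console does not support transactions

with conn.cursor() as cur:
    cur.execute("SHOW LISTS")
    stats = dict(cur.fetchall())

# free + used is just the currently allocated slots, not a connection limit
print("allocated server slots:", stats["free_servers"] + stats["used_servers"])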

Context Broker crashing with certain update queries

We're running the Context Broker on a CentOS server, but it keeps crashing on certain update queries. We've tried version 0.26 and the latest 1.0.0-1, but the result is the same; we've also tried changing the MongoDB version between 3.0.6 and 3.0.7, with no luck. The logs don't give us much to go on, so that's why we're asking here on SO.
What we're doing is sending an update of an entity about 1 MB in size, routed in from an HTTP call via nginx. The Context Broker crashes (see logs below) but MongoDB and other services continue to function normally.
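For reference, the update is shaped roughly like this (a sketch of an NGSIv1 updateContext call; the host, attribute name, and payload are illustrative, and the entity id is the truncated one from the log below):

import requests

payload = {
    "contextElements": [
        {
            "type": "House",
            "isPattern": "false",
            "id": "8a55c32500dfad.....06be56709b75b31c1f9beb7d2",  # truncated in the log
            "attributes": [
                # a single large attribute to reach roughly 1 MB
                {"name": "floorPlan", "type": "string", "value": "x" * 1000000}
            ],
        }
    ],
    "updateAction": "APPEND",
}
resp = requests.post("http://localhost:1026/v1/updateContext", json=payload)
print(resp.status_code, resp.text[:200])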
Log file: /var/log/contextBroker/contextBroker.log
terminate called after throwing an instance of 'mongo::MsgAssertionException'
what(): EOO Before end of object
Log file: /var/log/messages
Apr 28 07:15:50 gl abrt[11457]: Saved core dump of pid 11426 (/usr/bin/contextBroker) to /var/spool/abrt/ccpp-2016-04-28-07:15:49-11426 (63606784 bytes)
Apr 28 07:15:50 gl abrtd: Directory 'ccpp-2016-04-28-07:15:49-11426' creation detected
Apr 28 07:15:50 gl abrtd: Package 'contextBroker' isn't signed with proper key
Apr 28 07:15:50 gl abrtd: 'post-create' on '/var/spool/abrt/ccpp-2016-04-28-07:15:49-11426' exited with 1
Apr 28 07:15:50 gl abrtd: Deleting problem directory '/var/spool/abrt/ccpp-2016-04-28-07:15:49-11426'
Output from the contextBroker when it's run in verbose mode:
INFO#14:05:27 logMsg.h[1792]: Starting transaction from 127.0.0.1:51245/v1/updateContext
INFO#14:05:27 connectionOperations.cpp[78]: Database Operation Successful (query: { id.id: "8a55c32500dfad.....06be56709b75b31c1f9beb7d2", id.type: "House", _id.servicePath: /^\/$/ })
terminate called after throwing an instance of 'mongo::MsgAssertionException'
what(): BSONElement: bad type 100
Any ideas about what could be causing this, or where we should continue looking?
This crash is due to a bug detected in Orion. A fix is on the way, so we hope it gets merged in time to be included in the next Orion release (Orion 1.2.0).

clearcase syncreplica import error

I am trying to import 25 packets from the incoming bay on my vob server.
An lspacket of each of the packets shows that they are fragments 1 to 25.
Here's an example:
multitool lspacket sync_XXX_12977
Packet is: /clearcase/shipping/ms_ship/incoming/sync_XXX_12977
Packet type: Update
Packet fragment: 1 of 25
...
multitool lspacket sync_XXX_12977_6
Packet is: /clearcase/shipping/ms_ship/incoming/sync_XXX_12977_6
Packet type: Update
Packet fragment: 6 of 25
...and so on up to _25.
So to import all the fragments/packets at once, I did a syncreplica -import [all packets from sync_XXX_12977 to sync_XXX_12977_25].
With this I get an error like:
multitool: Error: Unable to write file "/var/tmp/syncs04042": No space left on device.
Can anyone please help me with this?
Also, I should mention that incoming packets for other vobs seem to have fewer fragments, and they are successfully imported by the scheduled sync_receive.
I'm not sure why this error occurs only for packets for this particular vob. Could it be because of the larger number of fragments?
Here is some more info about the error:
multitool: Error: Vob server operation "Create Container" failed.
Additional information may be available in the vob_log on host "VOBserver.qwerty.com"
multitool: Error: Unable to create a container in vob "/vobs/products", because group "root" not in vob's group list.
multitool: Error: Unable to replay oplog entry 927736: Not owner.
927736:
op= checkin
replica_oid= 9c863907.23ca11e2.9baf.00:01:83:db:e4:2d (ABC_SW)
oplog_id= 659061
op_time= 2014-06-06T07:18:47Z create_time= 2014-07-31T09:18:01Z
version_oid= 8426e33c.ed4b11e3.931b.00:01:83:db:e4:2d (*no view*)
event comment= "created by clearfsimport"
data size= 116 data= 0x12e108
------------
ver_oid= 8426e33c.ed4b11e3.931b.00:01:83:db:e4:2d (*no view*)
ver_num= 1
ver_fstat= ino: 0; type: 1; mode: 00
usid: DONTCARE
gsid: DONTCARE
nlink: 0; size: 130017856
atime: Thu Jan 1 05:30:00 1970
mtime: Fri Jun 6 12:48:14 2014
ctime: Fri Jun 6 12:48:14 2014
ckout_ver_oid= 8426e33c.ed4b11e3.931b.00:01:83:db:e4:2d (*no view*)
I checked the vob's properties using lsvob -long and desc: the vob owner and group are the CC admin and ccgrp.
If those packets are particularly big, that might explain the "No space left on device" message.
The first check is to do a:
cd /var/tmp
df -h .
and check what space you have left to work with.
Once that disk space issue is fixed, you should:
- get the primary group of the vob (cleartool descr -l vob:/vobs/products)
- check that id -a (for ccadmin) returns that group as the primary group (meaning the first group displayed by that command should be the one of the vob)

Selenium IDE gives different result when running the same code

I am new to Selenium and was just playing with the IDE. I have a website that runs locally on my machine, which has an IFrame and some popups. The following code runs very well in medium or slow speed mode, but in fast mode it fails with an error (see line 15 below), even though I tried to add wait statements to keep things in sync.
Also notice that the same command executes just fine at line 9, whether running slow or fast.
01 open /default.aspx
02 type id=loginContent_txtPassword xxxx
03 clickAndWait id=loginContent_btnSet
04 windowFocus
05 click //div[#id='lBar_leftItem_4']/a
06 waitForFrameToLoad aframe 30000
07 selectFrame aframe
08 click css=img[title="Properties"]
09 waitForPopUp doc 30000
10 selectWindow name=doc
11 close
12 selectWindow null
13 selectFrame aframe
14 click css=img[title="Properties"]
15 waitForPopUp doc 30000 [error] can't access dead object
16 selectWindow name=doc
17 verifyText id=popupContent_lblOwner XYZ*
18 close
I tried many things... but finally, putting a short pause (a few seconds) before the statements that cause the issue solved the problem. Maybe the Selenium requests are not synced, and it sends some requests to the browser before the browser can handle them (just my guess!)
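For example, inserting a pause between lines 13 and 14 fixed it for me (the 3000 ms value is illustrative; same command format as above, line numbers omitted):

selectFrame aframe
pause 3000
click css=img[title="Properties"]
waitForPopUp doc 30000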