django-celery-results won't receive results - celery

I have Celery set up and working together with Django. I have some periodic tasks that run. The Celery log shows that the tasks are executed and that they return something.
[2017-03-26 14:34:27,039: INFO/MainProcess] Received task: my_webapp.apps.events.tasks.clean_outdated[87994396-04f7-452b-a964-f6bdd07785e0]
[2017-03-26 14:34:28,328: INFO/PoolWorker-1] Task my_webapp.apps.events.tasks.clean_outdated[87994396-04f7-452b-a964-f6bdd07785e0] succeeded in 0.05246314400005758s: 'Removed 56 event(s)
| Removed 4 SGW(s)
'
But the result is not showing up on django-celery-results admin page.
These are my settings:
CELERY_BROKER_URL = os.environ.get('BROKER_URL')
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Europe/Stockholm'
CELERY_RESULT_BACKEND = 'django-cache'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_DB_SHORT_LIVED_SESSIONS = True # Fix for low traffic sites like this one
I have also tried setting CELERY_RESULT_BACKEND = 'django-db'. I know the migrations have been run (when using those settings); the table exists in the database:
my_webapp=> \dt
List of relations
Schema | Name | Type | Owner
--------+--------------------------------------+-------+----------------
...
public | django_celery_beat_crontabschedule | table | my_webapp
public | django_celery_beat_intervalschedule | table | my_webapp
public | django_celery_beat_periodictask | table | my_webapp
public | django_celery_beat_periodictasks | table | my_webapp
public | django_celery_results_taskresult | table | my_webapp
...
(26 rows)
Google won't give me much help; most answers are about old libraries like djcelery. Any idea how to get the results into the table?
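One thing worth checking (an assumption, since the Celery app module isn't shown): settings prefixed with CELERY_ are only read if the app is configured with that namespace. A minimal celery.py of the kind usually paired with these settings would look roughly like this (my_webapp and the settings module path are assumptions):
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_webapp.settings')

app = Celery('my_webapp')
# Without namespace='CELERY', settings such as CELERY_RESULT_BACKEND above are
# never picked up, and results are silently discarded instead of being stored.
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()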

Related

postgresql subscription not working for schemas other than public

I'm trying to create a logical replication with two local postgresql servers (node1: port 5434, node2: port 5435).
I could successfully create publication and subscription on node1 and node2 for a table in public schema.
Node1:
CREATE PUBLICATION my_pub FOR TABLE t1;
GRANT SELECT ON t1 TO repuser;
Node2:
CREATE SUBSCRIPTION my_sub CONNECTION 'host=localhost port=5434 dbname=pub user=repuser password=password' PUBLICATION my_pub;
Node2 public.t1 replicates all data in node1 public.t1.
However, my problem is that when I create the publication and subscription with the same code but in a different schema, node2 fails to replicate.
Below is the output of some pg_catalog queries:
Node1:
pub=# select * from pg_catalog.pg_publication_tables;
pubname | schemaname | tablename
----------+------------+-----------
my_pub | public | t1
cdl_test | cdl | t1
pub_test | test | t1
Node2:
sub=# \dRs
List of subscriptions
Name | Owner | Enabled | Publication
--------------+----------+---------+-------------
cdl_sub_test | postgres | t | {cdl_test}
my_sub | postgres | t | {my_pub}
sub_test | postgres | t | {pub_test}
sub=# select * from pg_catalog.pg_replication_origin;
roident | roname
---------+----------
2 | pg_18460
1 | pg_18461
3 | pg_18466
sub=# select * from pg_catalog.pg_subscription_rel ;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+------------
18461 | 16386 | r | 0/3811C810
18466 | 18463 | d |
18460 | 18456 | d |
As shown in select * from pg_catalog.pg_subscription_rel, the two subscriptions for the test and cdl schemas are in the d (data is being copied) state.
Any recommendation on how to go about this problem or diagnose why the problem occurs?
As jjanes has suggested, a snippet of the log file is shown below:
2022-01-17 16:05:25.165 PST [622] WARNING: out of logical replication worker slots
2022-01-17 16:05:25.165 PST [622] HINT: You might need to increase max_logical_replication_workers.
2022-01-17 16:05:25.168 PST [970] LOG: logical replication table synchronization worker for subscription "cdl_sub_test", table "t1" has started
2022-01-17 16:05:25.245 PST [970] ERROR: could not start initial contents copy for table "cdl.t1": ERROR: permission denied for schema cdl
2022-01-17 16:05:25.247 PST [471] LOG: background worker "logical replication worker" (PID 970) exited with exit code 1
2022-01-17 16:05:25.797 PST [488] postgres@sub LOG: statement: /*pga4dash*/
It seems like the subscriber doesn't have permission to read the cdl schema on the publisher, even after I granted SELECT ON cdl.t1 TO repuser.
You have to give the user repuser permission to read the table that should be replicated. That also requires USAGE permission on the schema that contains the table.
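Concretely, on the publisher (node1) something along these lines should be enough for the schemas shown above:
GRANT USAGE ON SCHEMA cdl TO repuser;
GRANT SELECT ON cdl.t1 TO repuser;
-- and likewise for the test schema
GRANT USAGE ON SCHEMA test TO repuser;
GRANT SELECT ON test.t1 TO repuser;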

complex canvas getting stuck in the middle

Setup:
Celery 4.1.0, broker=RabbitMQ 3.6.5, backend=Redis 3.2.5
celery worker -A worker.celeryapp:app -l info -Q default -c 2 -n defaultworker@%h -Ofair
Consider the following canvas:
@app.task(name='task_1',
          bind=True,
          base=MyConnectionHolderTask)
def task_1(self, feed_id, flow_id, **kwargs):
    do_something()
task_1 = t_1.si(feed_id, flow_id)
.
.
task_13 = t_13.si(feed_id, flow_id)
(task_1 |
 group((task_2 | group(task_3, task_4)),
       task_5,
       task_6,
       task_7,
       task_8) |
 task_9 |
 task_10 |
 task_11 |
 task_12 |
 task_13).apply_async(link_error=unlock)
This means I have a chain of tasks in which one member is a group of several tasks, and one of those is itself a chain of size 2 (whose latter task is a group of 2).
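For reference, the same structure written with the explicit chain/group primitives instead of the | operator (a sketch, using the same signatures as above):
from celery import chain, group

workflow = chain(
    task_1,
    group(chain(task_2, group(task_3, task_4)),
          task_5,
          task_6,
          task_7,
          task_8),
    task_9, task_10, task_11, task_12, task_13,
)
workflow.apply_async(link_error=unlock)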
Expected behavior
All tasks succeed, so I expect the canvas to run through to task_13.
Actual behavior
task_4 is the last to run; task_9 and the rest (10..13) don't run.
If I remove the group of task_3 & task_4, it does work (through task_13):
(task_1 |
 group((task_2 | task_3 | task_4),
       task_5,
       task_6,
       task_7,
       task_8) |
 task_9 |
 task_10 |
 task_11 |
 task_12 |
 task_13).apply_async(link_error=unlock)
Ref: issue on GitHub

cassandra 2.0.7 cql SELECT specific value from map

ALTER TABLE users ADD todo map&lt;text, text&gt;;
UPDATE users SET todo = { '1':'1111', '2':'2222', '3':'3' ,.... } WHERE user_id = 'frodo';
Now I want to run the following CQL, but it failed. Is there any other method?
SELECT user_id, todo['1'] FROM users WHERE user_id = 'frodo';
PS: the length of my map can change. For example: { '1':'1111', '2':'2222', '3':'3' } or { '1':'1111', '2':'2222', '3':'3', '4':'4444' } or { '1':'1111', '2':'2222', '3':'3', '4':'4444' ...}
If you want to use a map collection, you'll have the limitation that you can only select the collection as a whole (docs).
I think you could use the suggestion from the referenced question, even if the length of your map changes. If you store those key/value pairs for each user_id in separate fields, and make your primary key based on user_id and todo_k, you'll have access to them in the select query.
For example:
CREATE TABLE users (
    user_id text,
    todo_k text,
    todo_v text,
    PRIMARY KEY (user_id, todo_k)
);
-----------------------------
| user_id | todo_k | todo_v |
-----------------------------
| frodo | 1 | 1111 |
| frodo | 2 | 2222 |
| sam | 1 | 11 |
| sam | 2 | 22 |
| sam | 3 | 33 |
-----------------------------
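Rows like those could be inserted with plain INSERT statements, for example:
INSERT INTO users (user_id, todo_k, todo_v) VALUES ('frodo', '1', '1111');
INSERT INTO users (user_id, todo_k, todo_v) VALUES ('frodo', '2', '2222');
INSERT INTO users (user_id, todo_k, todo_v) VALUES ('sam', '1', '11');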
Then you can do queries like:
select user_id, todo_k, todo_v from users where user_id = 'frodo';
select user_id, todo_k, todo_v from users where user_id = 'sam' and todo_k = '2';

Update a single value in a database table through form submission

Here is my table in the database :
id | account_name | account_number | account_type | address | email | ifsc_code | is_default_account | phone_num | User
-----+--------------+----------------+--------------+---------+------------------------------+-----------+--------------------+-------------+----------
201 | helloi32irn | 55265766432454 | Savings | | mypal.appa99721989@gmail.com | 5545 | f | 98654567876 | abc
195 | hello | 55265766435523 | Savings | | mypal.1989@gmail.com | 5545 | t | 98654567876 | axyz
203 | what | 01010101010101 | Current | | guillaume@sample.com | 6123 | f | 09099990 | abc
On form submission, the view only posts a single parameter, in my case name="activate", which corresponds to the column "is_default_account" in the table.
I want to change the value of "is_default_account" from "t" to "f". For example, in the table here it is "t" for account_name "hello"; I want to deactivate it, i.e. make it "f", and activate whichever other account was sent through the form.
This will update your table and make account 'what' the default (assuming that is_default_account is a BOOLEAN field):
UPDATE table
SET is_default_account = (account_name = 'what')
You may want to limit updates if the table contains more than just the few rows you listed, like this:
UPDATE table
SET is_default_account = (account_name = 'what')
WHERE is_default_account != (account_name = 'what')
AND <limit updates by some other criteria like user name>
I think to accomplish what you want you should send at least two values from the form: one for the id of the account you want to update and one for the action (activate here). You could also just send the id and have it toggle. There are many ways to do this, but I can't figure out exactly what you are trying to do and whether you want SQL or Play Framework code. Without limiting your update somewhere (like by id) you can't precisely control which specific rows get updated. Please clarify your question and add some more code if you want help on the Play Framework side, which I assume you do.
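As a sketch of that id-based approach (the table name accounts is a placeholder, since the real name isn't shown; column names are taken from the listing above):
-- make account 201 the default and clear the flag on abc's other accounts
UPDATE accounts
SET is_default_account = (id = 201)
WHERE "User" = 'abc';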

Zend DB inserting relational data

I've been using the Zend Framework database relationships for a couple of weeks now. My first impression is pretty good, but I do have a question about inserting related data into multiple tables. For a little test application I've related two tables with each other by using a fuse table.
+---------------+ +---------------+ +---------------+
| Pages | | Fuse | | Prints |
+---------------+ +---------------+ +---------------+
| pageid | | fuseid | | printid |
| page_active | | fuse_page | | print_title |
| page_author | | fuse_print | | print_content |
| page_created | | fuse_locale | | ... |
| ... | | ... | +---------------+
+---------------+ +---------------+
Above is an example of my DB architecture
Now, my problem is how to insert related data into two separate tables and insert the two newly created IDs into the fuse table at the same time. If someone could explain this or point me to a related tutorial, I would appreciate it!
I assume you have separate models for each table. Then simply insert a row into the Prints table and store the returned ID in a variable, insert a row into the Pages table and store its ID in another variable, and finally insert the data into your Fuse table. You do not need any "at the same time" (atomic) operation here. The ID of a newly inserted row is returned by insert() (I assume you use autoincrement fields for this).
$printsModel = new Application_Model_Prints();
$pagesModel = new Application_Model_Pages();
$fuseModel = new Application_Model_Fuse();

$printData = array('print_title' => 'foo',
                   ...);
$printId = $printsModel->insert($printData);

$pagesData = array('page_author' => 'bar',
                   ...);
$pageId = $pagesModel->insert($pagesData);

$fuseData = array('fuse_page' => $pageId,
                  'fuse_print' => $printId,
                  ...);
$fuseId = $fuseModel->insert($fuseData);
This is pseudo code, so you may want to move the inserts into your models and do some normalisation, etc.
I also suggest paying more attention to the field naming convention. It usually helps; right now you have fuseid but also fuse_page, so it should be either fuse_id or fusepage (not to mention I suspect this field stores an id, so it would be fuse_page_id or fusepageid).
Prints and Pages are two entities. Create row classes for each:
class Model_Page extends Zend_Db_Table_Row_Abstract
{
    public function addPrint($print)
    {
        $fuseTb = new Table_Fuse();
        $fuse = $fuseTb->createRow();
        $fuse->fuse_page = $this->pageid;
        $fuse->fuse_print = $print->printid;
        $fuse->save();
        return $fuse;
    }
}
Now when you create a page:
$page = $pageTb->createRow(); // instance of Model_Page is returned
$page->save(); // save first so pageid is populated before linking
$page->addPrint($printTb->find(1)->current());