For a few days I have been trying to automate the deletion of annoying bot messages on my web page. I have set up Google reCAPTCHA and it has slowed them down a lot; however, some bots still post messages, and I'd like to set something up in my phpMyAdmin database to automate the deletion. For example, if they say 'buy my pills', I want to express: 'If this column of my table contains the word pills, delete the row.'
So far I have tried 2 methods in my table 'commentsmain' with no success:
TRIGGER (it just prevents every single message from being posted)
DELETE FROM commentsmain WHERE message LIKE '%pills%'
This is the whole trigger code:
DROP TRIGGER IF EXISTS spamMain;
CREATE DEFINER=`u956484391_database`@`127.0.0.1` TRIGGER spamMain
AFTER INSERT ON commentsmain
FOR EACH ROW
DELETE FROM commentsmain WHERE message LIKE '%HTTP%'
CHECK CONSTRAINT (it just says the code is incorrect)
a) ALTER TABLE commentsmain
ADD CHECK (message NOT LIKE '%pills%');
b) ADD CONSTRAINT checkHTTP CHECK (message NOT LIKE '%pills%');
I expect to automate the deletion of bot messages via my phpMyAdmin database.
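Something along the lines of the sketch below is what I am aiming for (an untested idea; it assumes MySQL's event scheduler can be enabled on my hosting):
-- Untested sketch: a scheduled event that periodically purges spam rows
-- (requires the event scheduler, e.g. SET GLOBAL event_scheduler = ON;)
DROP EVENT IF EXISTS purgeSpamMain;
CREATE EVENT purgeSpamMain
ON SCHEDULE EVERY 1 HOUR
DO
  DELETE FROM commentsmain
  WHERE message LIKE '%pills%' OR message LIKE '%HTTP%';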
Related
I'm currently building a forum-like application. Users will be able to see recent posts with the total like count. If a post is interesting to the user, they can like it as well and contribute to the total like count.
The normalized approach would be to have two tables: user_post(contains id, metadata ...), liked_post(which includes the user id + post id). When posts are getting queried, the like count would be determined with the COUNT() statement on the liked_post table grouped by the post id.
I'm thinking of another approach, which requires no GROUP BY on a potentially huge table. That would be to add a like_count column to the user_post table and break the normalization. This column would always be updated when a new liked_post entry gets inserted or deleted. That means: every time a user likes a post, there will be an update on the user_post table (increment the like_count column) plus an insert/delete in the liked_post table (with a trigger or code in the app layer).
Would this aggregation-on-the-fly approach have any disadvantages, other than consistency concerns? It would enable very simple and fast SELECT queries, but I'm not sure if the additional update would be an issue.
What are your thoughts?
I'm really interested in the performance impact, not in whether you should do this from the beginning of the project.
Your idea is correct and widely used. The problem you will face is:
how do you make sure that like_count is valid? Can this number be delayed or approximated somehow?
In general you can do this in the following ways:
update like_count within application code
update like_count by triggers
If you want the values to be exactly correct, you could accumulate those counts with triggers or do it programmatically, ensuring that the like count update is always within the same transaction as the insert into liked_posts.
Using triggers it could be something like this:
CREATE FUNCTION public.update_like_count() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN
    -- increment the denormalized counter on the liked post
    UPDATE user_post SET like_count = like_count + 1
    WHERE user_post.id = NEW.post_id;
    RETURN NEW;
END;
$$;
CREATE TRIGGER update_like_counts
AFTER INSERT ON public.liked_posts
FOR EACH ROW EXECUTE PROCEDURE public.update_like_count();
Also, you should handle the delete case (AFTER DELETE) with a separate trigger.
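A minimal sketch of that delete-side trigger, mirroring the insert trigger above (same naming; not taken from a tested setup):
CREATE FUNCTION public.decrement_like_count() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN
    -- decrement the denormalized counter when a like is removed
    UPDATE user_post SET like_count = like_count - 1
    WHERE user_post.id = OLD.post_id;
    RETURN OLD;
END;
$$;
CREATE TRIGGER update_like_counts_on_delete
AFTER DELETE ON public.liked_posts
FOR EACH ROW EXECUTE PROCEDURE public.decrement_like_count();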
Be aware that depending on the transaction isolation level you might run into a concurrency problem here (if two inserts happen at the same time, like_count may read as exactly the same number in both transactions) and end up with an invalid total.
So I've had a problem similar to this in the past; the solution I went with is similar to what you've described, which is having an aggregated stored value like_count. Like you mentioned, the only downside would be consistency concerns; however, this problem exists even in the normalized approach.
The solution to something like this lies more on the application side, e.g. utilizing something like WebSockets to keep posts up to date without too much fluff.
When a user's browser/client loads a post, they join a room keyed by the post id, and when a user interacts with a post (like, dislike, etc.) that interaction is broadcast to all users in that room (post id).
Finally, when it comes to finding out which users liked the post, you can query/load that at the point when the user clicks to find out. ~ cheers
I am creating a data flow task which will be extracting data from a source table and will be updating a destination table as follows:
1) Use the unique id in the source record to find the record you want to update in the destination table.
2) If the ID does not exist in the destination table, check whether the email of the source record exists in the destination table instead.
a) If the email exists, update the destination record through the email. Also update the unique id of that destination record.
b) If the email does not exist, insert a new record to the destination table.
So, in simple words, I am creating a task that will update a table on its unique id and, if it does not have a match, will attempt to update on the email. If it still does not find a match, it will insert a new record.
This means that I will have two updates running in parallel as you can see in the image (the two circled components will be running in parallel)
[Image: SSIS_Data_Flow_Task]
Now, this generates a deadlock issue because of those two updates.
I have tried using WITH (NOLOCK), but this hint is for reading data, not updating it. I have also searched for delay tasks to delay one of the two data pipelines until the other has finished.
Any ideas? Could I maybe design my data flow task differently in order to avoid having multiple parallel updates in the first place?
Any help will be greatly appreciated.
With these types of flows I always work with a work table (destination table id, work type (U or I), ...). In a first step I fill that table with the work that needs to be done; then I apply the work.
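A rough T-SQL sketch of that two-step pattern; the table and column names (src, dest, work_queue, email, name) are made up for illustration:
-- Step 1: work out what needs to happen, without touching the destination yet.
INSERT INTO work_queue (src_id, dest_id, work_type)
SELECT s.id,
       d.id,
       CASE WHEN d.id IS NULL THEN 'I' ELSE 'U' END
FROM   src s
LEFT JOIN dest d
       ON d.id = s.id OR d.email = s.email;
-- Step 2a: apply the updates in one serial statement, so no parallel updates.
UPDATE d
SET    d.id = s.id, d.email = s.email, d.name = s.name
FROM   dest d
JOIN   work_queue w ON w.dest_id = d.id AND w.work_type = 'U'
JOIN   src s ON s.id = w.src_id;
-- Step 2b: apply the inserts.
INSERT INTO dest (id, email, name)
SELECT s.id, s.email, s.name
FROM   src s
JOIN   work_queue w ON w.src_id = s.id AND w.work_type = 'I';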
I am referencing the 2 step newsletter example at http://agiletoolkit.org/codepad/newsletter. I modified the example into a 4 step process. The following page class is step 1, and it works to insert a new record and get the new record id. The problem is I don't want to insert this record into the database until the final step. I am not sure how to retrieve this id without using the save() function. Any ideas would be helpful.
class page_Ssp_Step1 extends Page {
    function init(){
        parent::init();
        $p=$this;
        $m=$p->add('Model_Publishers');
        $form=$p->add('Form');
        $form->setModel($m);
        $form->addSubmit();
        if($form->isSubmitted()){
            $m->save();                              // inserts new record into db
            $new_id=$m->get('id');                   // gets id of new record
            $this->api->memorize('new_id',$new_id);  // carries id across pages
            $this->js()->atk4_load($this->api->url('./Step2'))->execute();
        }
    }
}
There are several ways you could do this, using either atk4 functionality, MySQL transactions, or the design of your application.
1) Manage the id column yourself
I assume you are using an auto increment column in MySQL, so one option would be to not make this column auto increment but use a sequence instead: select the next value, save it with your memorize statement, and add it in the model as a defaultValue using ->defaultValue($this->api->recall('new_id')).
2) Turn off autocommit and create a transaction around the inserts
I'm from an Oracle background rather than MySQL, but MySQL also allows you to wrap several statements in a transaction which either saves everything or rolls back, so this would also be an option. If you can create a transaction, you might still be able to call save(), but the complete transaction populating several tables would only be committed if all steps complete.
In atk 4.1, the DBlite/mysql.php class contains some functions for transaction support, but the documentation on agiletoolkit.org is incomplete and it's unclear how you change the dbConnect being used: currently you connect to a database in lib/Frontend.php using $this->dbConnect(), but there is no option to pass a parameter.
It looks like you may be able to do the needed transaction commands using this at the start of the first page
$this->api->db->query('SET AUTOCOMMIT=0');
$this->api->db->query('START TRANSACTION');
then do the inserts in the various pages as needed. Note that everything done will be contained in a transaction, so if the user doesn't complete the process, nothing will be saved.
On the last insert,
$this->api->db->query('COMMIT');
Then, if you want to, turn autocommit back on so each SQL statement is committed:
$this->api->db->query('SET AUTOCOMMIT=1');
I haven't tried this, but hopefully that helps.
3) use beforeInsert or afterInsert
You can also look at overriding the beforeInsert function on your model, which receives an array of the data; but I think if your id is an auto increment column, it won't have a value until the afterInsert function, which has a parameter of the inserted id.
4) use a status to indicate complete record
Finally, you could use a status column on your record to indicate that it is only at the first stage, and this only gets updated to a complete status when the final stage is completed. Then you can have a housekeeping job that runs at intervals to remove records that didn't complete all stages. Any grid or CRUD where you display these records would be limited with AddCondition('status','C') in the model, or added on the page, so that incomplete ones never get shown.
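As a rough sketch of that housekeeping job (the table and column names are made up for illustration, not taken from the example):
-- Periodically purge records that never reached the complete status.
DELETE FROM publisher
WHERE status <> 'C'
  AND created_dts < NOW() - INTERVAL 1 DAY;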
5) Manage the transaction as non sql
As suggested by Romans, you could store the result of the form processing in session variables instead of writing it directly to the database, and then use SQL to insert it once the last step is completed.
I have two tables in APEX that are linked by their primary key. One table (APEX_MAIN) holds the basic metadata of a document in our system and the other (APEX_DATES) holds important dates related to that document's processing.
For my team I have created a control panel where they can interact with all of this data. The issue is that right now they alter the information in APEX_MAIN on one page and then alter APEX_DATES on another. I would really like to have these forms on the same page and submit updates to their respective tables and rows with a single submit button. I have set this up currently using two different regions on the same page, but I am getting errors both with the initial fetching of the rows (whichever row is fetched second seems to work, but then the page items in the form that was fetched first are empty) and with submitting (it gives an error about information in the DB having been altered since the update request was sent). Can anyone help me?
It is a limitation of the built-in Apex forms that you can only have one automated row fetch process per page, unfortunately. You can have more than one form region per page, but you have to code all the fetch and submit processing yourself if you do (not that difficult really, but you need to take care of optimistic locking etc. yourself too).
Splitting one table's form over several regions is perfectly possible, even using the built-in form functionality, because the region itself is just a layout object, it has no functionality associated with it.
Building forms manually is quite straight-forward but a bit more work.
Items
These should have the source set to "Static Text" rather than database column.
Buttons
You will need buttons like Create, Apply Changes and Delete that submit the page. These need unique request values so that you know which table is being processed, e.g. CREATE_EMP. You can make the buttons display conditionally, e.g. Create only when the PK item is null.
Row Fetch Process
This will be a simple PL/SQL process like:
select ename, job, sal
into :p1_ename, :p1_job, :p1_sal
from emp
where empno = :p1_empno;
It will need to be conditional so that it only fires on entry to the form and not after every page load - otherwise if there are validation errors any edits will be lost. This can be controlled by a hidden item that is initially null but set to a non-null value on page load. Only fetch the row if the hidden item is null.
Submit Process(es)
You could have 3 separate processes for insert, update and delete associated with the buttons, or a single process that looks at the :REQUEST value to see what needs doing (a sketch of that variant follows the example below). Either way the processes will contain simple DML like:
insert into emp (empno, ename, job, sal)
values (:p1_empno, :p1_ename, :p1_job, :p1_sal);
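For the single-process variant, a rough sketch driven by :REQUEST could look like this (CREATE_EMP is the request value mentioned above; SAVE_EMP and DELETE_EMP are just illustrative):
if :REQUEST = 'CREATE_EMP' then
    insert into emp (empno, ename, job, sal)
    values (:p1_empno, :p1_ename, :p1_job, :p1_sal);
elsif :REQUEST = 'SAVE_EMP' then
    update emp
    set ename = :p1_ename, job = :p1_job, sal = :p1_sal
    where empno = :p1_empno;
elsif :REQUEST = 'DELETE_EMP' then
    delete from emp
    where empno = :p1_empno;
end if;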
Optimistic Locking
I omitted this above for simplicity, but one thing the built-in forms do for you is handle "optimistic locking" to prevent 2 users updating the same record simultaneously, with one's update overwriting the other's. There are various methods you can use to do this. A common one is to use OWA_OPT_LOCK.CHECKSUM to compare the record as it was when selected with as it is at the point of committing the update.
In fetch process:
select ename, job, sal, owa_opt_lock.checksum('SCOTT','EMP',ROWID)
into :p1_ename, :p1_job, :p1_sal, :p1_checksum
from emp
where empno = :p1_empno;
In submit process for update:
update emp
set job = :p1_job, sal = :p1_sal
where empno = :p1_empno
and owa_opt_lock.checksum('SCOTT','EMP',ROWID) = :p1_checksum;
if sql%rowcount = 0 then
-- handle fact that update failed e.g. raise_application_error
end if;
Another, easier solution for the fetching part is creating a view with all the fields that you need.
The weak point is that you later need to alter the "submit" code to insert into the tables that are the source of the view's data.
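For example, a view along these lines could drive a single fetch; the column names are invented, with the two tables joined on the shared primary key described in the question:
CREATE OR REPLACE VIEW apex_doc_v AS
SELECT m.doc_id,
       m.title,           -- metadata columns from APEX_MAIN (illustrative)
       d.received_date,   -- date columns from APEX_DATES (illustrative)
       d.processed_date
FROM   apex_main  m
JOIN   apex_dates d ON d.doc_id = m.doc_id;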
We have a table called Contracts. These contract records are created by users on an external site and must be approved or rejected by staff on an internal site. When a contract is rejected, it's simply deleted from the db. When it's accepted, however, a new record is generated called Contract Acceptance which is written to its own table and is derived from data that exists on the contract.
The problem is that two internal staff members may each end up opening the same contract. The first user accepts and a contract acceptance record is generated. Then, with the same contract record still open on the page, the second user accepts the contract again, creating a duplicate acceptance record.
The quick and dirty way to get past this is to retrieve the contract from the db just before it's accepted, check the status, and produce an error message saying that it's already been accepted. This would probably work for most circumstances, but the users could still click the Accept button at the exact same time and sneak by this validation code.
I've also considered a thread lock deep in the data layer that prevents two threads from entering the same region of code at the same time, but the app exists on two load-balanced servers, so the users could be on separate servers which would render this approach useless.
The only method I can think of would have to exist at the database. Conceptually, I would like to somehow lock the stored procedure or table so that it can't be updated twice at the same time, but perhaps I don't understand Oracle enough here. How do updates work? Are update requests somehow queued up so that they do not occur at the exact same time? If this is so, I could check the status of the record in the SQL and return a value in an out parameter stating it has already been accepted. But if update requests aren't queued, then two people could still get into the update SQL at the exact same time.
Looking for good suggestions on how to go about this.
First, if there can only be one Contract Acceptance per Contract, then Contract Acceptance should have the Contract ID as its own primary (or unique) key: that will make duplicates impossible.
Second, to prevent the second user from trying to accept the contract while the first user is accepting it, you can make the acceptance process lock the Contract row:
select ...
from Contract
where contract_id = :the_contract
for update nowait;
insert into Contract_Acceptance ...
The second user's attempt to accept will then fail with an exception:
ORA-00054: resource busy and acquire with nowait specified
In general, there are two approaches to the problem
Option 1: Pessimistic Locking
In this scenario, you're pessimistic so you lock the row in the table when you select it. When a user queries the Contracts table, they'd do something like
SELECT *
FROM contracts
WHERE contract_id = <<some contract ID>>
FOR UPDATE NOWAIT;
Whoever selects the record first will lock it. Whoever selects the record second will get an ORA-00054 error that the application will then catch and let them know that another user has already locked the record. When the first user completes their work, they issue their INSERT into the Contract_Acceptance table and commit their transaction. This releases the lock on the row in the Contracts table.
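The catch-and-report part could look roughly like this in PL/SQL (the exception name, error message and example id are illustrative only):
declare
    row_locked    exception;
    pragma exception_init(row_locked, -54);   -- ORA-00054: resource busy
    l_contract_id contracts.contract_id%type;
begin
    -- try to lock the contract row without waiting
    select contract_id
    into   l_contract_id
    from   contracts
    where  contract_id = 12345                -- the contract being accepted (example value)
    for    update nowait;
    -- ... INSERT INTO Contract_Acceptance here, then COMMIT ...
exception
    when row_locked then
        raise_application_error(-20001,
            'Another user is already processing this contract.');
end;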
Option 2: Optimistic Locking
In this scenario, you're being optimistic that the two users won't conflict so you don't lock the record initially. Instead, you select the data you need along with a Last_Updated_Timestamp column that you add to the table if it doesn't already exist. Something like
SELECT <<list of columns>>, Last_Updated_Timestamp
FROM Contracts
WHERE contract_id = <<some contract ID>>
When a user accepts the contract, before doing the INSERT into Contract_Acceptance, they issue an UPDATE on Contracts
UPDATE Contracts
SET last_updated_timestamp = systimestamp
WHERE contract_id = <<some contract ID>>
AND last_updated_timestamp = <<timestamp from the initial SELECT>>;
The first person to do this update will succeed (the statement will update 1 row). The second person to do this will update 0 rows. The application detects the fact that the update didn't modify any rows and tells the second user that someone else has already processed the row.
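In PL/SQL that detection could be as simple as the following sketch (l_read_timestamp stands for the value captured in the initial SELECT; the error message and example id are illustrative):
update contracts
set    last_updated_timestamp = systimestamp
where  contract_id = 12345                          -- example value
and    last_updated_timestamp = l_read_timestamp;
if sql%rowcount = 0 then
    raise_application_error(-20002,
        'Someone else has already processed this contract.');
end if;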
In Either Case
In either case, you probably want to add a UNIQUE constraint to the Contract_Acceptance table. This will ensure that there is only one row in the Contract_Acceptance table for any given Contract_ID.
ALTER TABLE Contract_Acceptance
ADD CONSTRAINT unique_contract_id UNIQUE (Contract_ID)
This is a second line of defense that should never be needed but protects you in case the application doesn't implement its logic correctly.