Deleting BlogEngine.NET comments from the database - tsql

I've searched through all the questions tagged blogengine.net and not found an answer here. I set up a site with BlogEngine.NET. At the time I never configured the anti-spam settings (figuring, dontcha know, that I was going to write such earth-shatteringly good content that all the comments would be affirmations of how great it was, with the occasional pithy insight).
Turns out the blog is more of a notebook for myself rather than anything people have engaged with (quelle surprise), so now I'm going to turn off comments (at least until I can find a good way to moderate them), but I need to delete the existing spam.
I've tried:
using the admin UI but it times out with "Could not delete comment: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated."
writing my own Q&D .aspx page to cycle through all the comments, but it suffers a similar fate: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding"
I then turned to the database itself and from inspection it seems that the be_PostComment table is the place to go so I issued a blanket delete statement there and it deleted all the rows.
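For reference, the statement was roughly the one below (stock BlogEngine.NET SQL schema assumed; it wipes every comment, spam or legitimate, so back up the table first):

    -- Blanket delete: removes every row from the comments table.
    -- Back up be_PostComment (or the whole database) before running this.
    DELETE FROM dbo.be_PostComment;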
However they're still shown in the UI - is this down to caching by ASP.NET?

It must have been ASP.NET caching, as when I wrote the initial question I had tried (CTRL+) F5 to no effect. I cycled the site and the app pool from IIS and I'm now happy to report that all spam comments are gone.

Related

Why is my server response time so high (First Byte Time)?

One of my websites has suddenly become extremely slow to respond - currently a First Byte Time of 41,859 ms.
I've already talked to the hosting provider, but they did not notice any changes or problems with the server, so I am at a loss as to what the problem is. I've already tried plugins for caching and image optimization, but I know those won't help.
Does anyone know what the problem might be? In case it's a virus, I've also run a security check, but nothing was found.
Any ideas?

G Suite Email Migration Does Not Complete, Stuck on 99%

I'm currently experiencing something rather weird: while migrating emails from a GoDaddy email server to a new G Suite setup for a number of users, I was able to successfully migrate a couple of the accounts, as confirmed by Google's 'Complete' tick beside them. I was also able to observe the migrations as they went on.
However, for one of the accounts, the number of emails read just seems to keep increasing; it still hasn't displayed 'Complete' and remains stuck on '99%'.
See the screenshots I took just now: in the first, it says 'Successfully migrated 3230 emails' while stuck on 99%.
Then I hit refresh, checked the status of that same account, and now it says '...3250 emails', while still stuck on 99%.
This isn't how it's supposed to behave; at least, it isn't the behaviour I saw with the previous 4 accounts in that list. Ideally, it should say 'Migrating X out of fixed_amount emails'. In this case, that fixed_amount was about 2,000 emails. It has since passed that figure, but instead of showing 'Complete', it shows 'Successfully migrated new_amount', where new_amount keeps increasing.
This has been ongoing for almost 24 hours now. Honestly, I don't know if this is a bug or not. I really just need some helpful info to know whether I should be concerned, or to hear from someone else who has run into this. Anyone?
I stumbled onto Google's documentation: https://support.google.com/a/answer/7032598?hl=en
To quote the 'Why does my migration look like it's stuck at 99%?' section:
You'll see 99% when all email is migrated. After everything is migrated, the data migration service applies any labels to the migrated email, which can take time. When the labels are applied, you should see that the migration is complete (100%).
You might also see this issue if the estimated number of emails to migrate exceeds the actual number of messages. The migration will report 99% until the migration completes. This process might take some time.
You shouldn't be concerned. I was migrating around 29,000 emails from a personal Gmail to a Google Workspace Gmail, and the migration took 4 days (migrating only one user), of which the last 1.5 days it was "stuck" at 99%. There's no need to restart the migration; it does eventually finish. I also got several error codes (e.g. 17009 - 'Generating an access token with the supplied credentials was unsuccessful...'), but none of them turned out to matter; I didn't act on them because, as in your case, I saw the number of migrated emails still increasing.

BIRT sessions stay open in Vertica

I'm having enormous problems managing connections in Vertica when developing BIRT reports. The basic idea is that sessions never die, so I always hit the connection cap. This is, of course, a problem, because then you can't use the database at all unless you do a close_all_sessions() to nuke everyone.
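For reference, the manual cleanup looks roughly like this (the session id is just a placeholder):

    -- See which sessions are being held open against the database.
    SELECT node_name, session_id, user_name, client_hostname, login_timestamp
    FROM v_monitor.sessions;

    -- Close a single offending session by its id...
    SELECT CLOSE_SESSION('v_reportdb_node0001-12345:0x1a2b');

    -- ...or nuke everyone, which is what I keep ending up doing.
    SELECT CLOSE_ALL_SESSIONS();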
This happens at just about every level of development there is. First, in Esproc, when you develop the underlying logic... if there's a bug in your program before the connection.close(), the connection stays open and Esproc opens up a new one on the next execution. This adds up REALLY quickly when you have a couple of users developing stuff on the network.
Next, in Eclipse it's the same thing. You open a report and Eclipse creates a dozen connections that'll stay open as long as you keep Eclipse open. Then, when you run the report, it'll create another batch of connections, totally ignoring the ones it already has... and if you have bugs in your report, the dozen extras won't close.
Then on our website, same thing... problem running the report, boom, connections won't close EVER. I've had sessions stay open for two weeks with absolutely no activity. They only disappeared when I restarted Tomcat.
I'm at my wit's end here. There doesn't seem to be ANY way to set a session timeout in Vertica, and I don't even know where to begin looking to solve these problems. Everywhere I could find, the connection timeout was set to 20 seconds... so I would expect a connection to disappear after reaching that time, but of course that's not the case.
I really have no idea what to do, and I'm desperate for some help. Can anyone give me a clue? I've been at this for two days now and my brain just can't take any more.
You want to use a connection pool instead of direct JDBC access; it will make the connection issues on Tomcat go away and improve performance.
Visit this article for more information.
Define the connection pool (CP) in [Tomcat home]/conf/server.xml
Link the CP to web applications in [Tomcat home]/conf/context.xml
Install Apache Probe or something similar on Tomcat; this will help you test whether the CP is correctly defined.
In BIRT reports, use the JNDI URL property to link a data source to the CP (a sketch of the two config entries follows below).
This will solve the problem for the website, though not for the Eclipse designer. For that, try upgrading to the most recent BIRT and JDBC versions.
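To illustrate, here is a minimal sketch of those two entries, assuming the Vertica JDBC driver jar sits in Tomcat's lib directory; the resource name, credentials, host and pool sizes are placeholders to adapt (the attribute names are the ones the classic Tomcat DBCP pool uses):

    <!-- [Tomcat home]/conf/server.xml, inside <GlobalNamingResources> -->
    <Resource name="jdbc/vertica" auth="Container" type="javax.sql.DataSource"
              driverClassName="com.vertica.jdbc.Driver"
              url="jdbc:vertica://vertica-host:5433/reportdb"
              username="birt_user" password="change_me"
              maxActive="20" maxIdle="5" maxWait="10000"
              removeAbandoned="true" removeAbandonedTimeout="60"/>

    <!-- [Tomcat home]/conf/context.xml -->
    <ResourceLink name="jdbc/vertica" global="jdbc/vertica"
                  type="javax.sql.DataSource"/>

In the BIRT data source, the JNDI URL would then be java:comp/env/jdbc/vertica. The removeAbandoned settings are what actually reap the connections a crashed report leaves behind, which is the behaviour you're missing today.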

Why does my Github webhook keep timing out?

We couldn’t deliver this payload: Service Timeout
I was successfully sending webhooks to my server 5 minutes ago, and now I just keep getting timeouts. I tried deleting the webhook and re-adding it, and changing the URL it points to, but nothing helps.
Am I flooding it with too many pushes, or is GitHub's webhook service just down?
It also turns out that GitHub has a 10-second timeout set on their webhooks. That is what I ran into. See the documentation here.
Unless there is some kind of error on the GitHub side (which doesn't seem to be the case at the moment, given their "System Status" history), you might check the program receiving the payload of that webhook.
See a similar problem in Supybot-plugins 225:
I contacted GitHub support and one of the employees has been troubleshooting this for me. Here is part of what he had to say about the issue:
I just tried making a request manually from one of our machines, and that went through with no error (see curl -v output below).
However, I did notice that it took extremely long for the request to be processed -- over 15 seconds (for 2 bytes of data).
Decoupling the listening for and reception of the payload from its processing is generally the right approach, as I recommended in "Perl Script slow over Tomcat 6.0 and generates service time out".
The first part should be as fast as possible.

Crystal Reports hanging

The company has recently implemented software not written by us. The software uses Crystal Reports, and whenever somebody draws a particularly large report and closes their browser before the report has finished loading, we cannot draw any more reports. The only way to fix it is to reset IIS, which is obviously exceptionally bad practice.
Any ideas on how to overcome this?
Thanks
So if one person closes their browser prematurely, the app breaks for everyone? Can two people load one of these long-running reports at once? Are there multiple templates, and does this break only one and leave the others OK?
It sounds a bit like the app's implementation of Crystal is holding an exclusive lock on the original template, and so when the user quits prematurely the app doesn't release the template for other users to use.
If it's a SQL Server database it is pulling data from, you could kill the SPID on the SQL Server side; that may allow the Crystal Reports process to exit more gracefully. If you're using IIS 6, you could configure the worker process to recycle automatically after a fixed number of requests or a time frame. Creating multiple worker processes may also help.
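As a rough sketch of the SPID route (the program_name filter and the session id below are placeholders; identify the abandoned report's session however fits your environment):

    -- Find the session still running the abandoned report's query.
    SELECT session_id, host_name, program_name, status, last_request_start_time
    FROM sys.dm_exec_sessions
    WHERE program_name LIKE '%Crystal%';  -- placeholder filter; adjust to how the app identifies itself

    -- Kill it so the report engine's connection can finish and exit.
    KILL 53;  -- substitute the session_id returned above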
I wonder why it is hanging, though: will it succeed if you wait long enough for the prior query and the current one to finish?
Finding a way to speed up the query would be a good idea too, or have large reports run off-hours and delivered to the users.