G Suite Email Migration Does Not Complete, Stuck on 99%

I'm currently experiencing something rather weird: while migrating emails from a GoDaddy email server to a new G Suite setup for a number of users, I was able to successfully migrate a few of the accounts, as confirmed by Google's 'Complete' tick beside them. I was also able to watch the migration as it went on.
However, for one of the accounts, the number of emails read just keeps increasing; it still hasn't displayed 'Complete' and remains stuck on '99%'.
See the screenshots I took just now below. As of the first one, it says 'Successfully migrated 3230 emails' while stuck on 99%:
Then I hit refresh, checked the status of that same account, and now it says '...3250 emails', still stuck on 99%:
This isn't how it's supposed to behave, or at least it isn't the behaviour I saw with the previous 4 accounts in that list. Ideally, it should say 'Migrating X out of fixed_amount emails'. In this case, that fixed_amount was about 2,000 emails. It has since passed that figure, but instead of showing 'Complete', it shows 'Successfully migrated new_amount', where new_amount keeps increasing.
This has been ongoing for almost 24 hours now. Honestly, I don't know if this is a bug or not. I really just need some helpful info on whether I should be concerned, or to hear whether someone else has run into this. Anyone?

Stumbled onto Google's documentation: https://support.google.com/a/answer/7032598?hl=en
To quote the 'Why does my migration look like it's stuck at 99%?' section:
You'll see 99% when all email is migrated. After everything is migrated, the data migration service applies any labels to the migrated email, which can take time. When the labels are applied, you should see that the migration is complete (100%).
You might also see this issue if the estimated number of emails to migrate exceeds the actual number of messages. The migration will report 99% until the migration completes. This process might take some time.

You shouldn't be concerned. I was migrating around 29,000 emails from a personal Gmail account to Google Workspace Gmail and the migration took 4 days (migrating only one user), of which the last 1.5 days the migration was "stuck" at 99%. No need to restart the migration; eventually it does finish. I also got several error codes (e.g. 17009 - 'Generating an access token with the supplied credentials was unsuccessful...'), but none of them turned out to matter. I didn't act on them because, as in your case, I saw the number of migrated emails increasing.

Related

Have you ever experienced connection issues with a Postgres database based on just a db name?

I've been idly bashing away at an issue with Postgres for months now. I have a bit of software (custom in-house stuff) that, on 24 out of 25 servers, runs a certain process absolutely fine, no issues whatsoever.
On the 25th server, though, the process wouldn't complete properly; it would fail at the final hurdle, which was a simple date change.
It's been a back-burner type issue, so I hadn't committed much time to working it out until management started to get angsty, and I spent most of yesterday bashing away at it.
Obvious checks were done first:
Postgres version (9.6)
Software version
Windows patches (Server 2019)
GPOs
NTFS permissions
etc
All checked out as matching across every server. I went through the Postgres and in-house software logs at length and had one of the developers build a standalone executable for the process with a ridiculous amount of logging - still no dice. No indicators. Procmon and Wireshark logs showed the same story: nothing clear at all as to what was going on.
So then we took a backup of the database, loaded it in under a different name for testing, and started running the process - only to find that it now works fine on the cloned database. This led us to suspect some kind of formatting issue in the database, on the theory that the backup and restore would shake things out. So we went back to the live database, backed it up again, deleted the DB from Postgres and restored it from the backup.
No dice. Still broken.
Cue some serious confusion. We've done essentially the same thing with cloning live to test and are still getting the same fault at the end of the process.
After some head scratching and more prodding around in the logs, I hit upon the idea of taking a fresh backup of the live DB, deleting the database, restoring the backup under a different name, and then pointing the live software install at the newly named live DB and testing the process again.
It works!
For clarity, the database names are basic alpha only, upper case and lower case. No numbers, no symbols. Less than 15 characters in length.
I'm at a loss as to why it's now working and I'd love to get some input from the community.
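For anyone who wants to reproduce that last test, the sequence was essentially the standard dump / drop / restore-under-a-new-name dance, roughly as in the sketch below (database names, host and user are placeholders; it just shells out to the usual pg_dump / dropdb / createdb / pg_restore tools):

    // Rough sketch of the restore-under-a-new-name test. All names are
    // placeholders; it assumes the standard PostgreSQL client tools are on
    // the PATH and that nothing else is connected to the database.
    const { execSync } = require('child_process');

    const host = 'localhost';
    const user = 'postgres';
    const oldDb = 'LiveDb';      // the problem database
    const newDb = 'LiveDbTwo';   // same data, different name

    const run = cmd => execSync(cmd, { stdio: 'inherit' });

    run(`pg_dump -h ${host} -U ${user} -Fc -f live.dump ${oldDb}`);  // custom-format dump
    run(`dropdb -h ${host} -U ${user} ${oldDb}`);                    // remove the original
    run(`createdb -h ${host} -U ${user} ${newDb}`);                  // empty DB with the new name
    run(`pg_restore -h ${host} -U ${user} -d ${newDb} live.dump`);   // load the dump into it
    // ...then repoint the application's connection string at newDb and re-test.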

Authentication Fail with MongoDB Compass Community

I've just created a new MongoDB account and I'm now trying to connect to the free cluster I created via the MongoDB Compass Community application, but I'm getting an 'Authentication Fail' error.
This is what I've checked so far:
From my MongoDB Clusters section, I clicked on the Connect (…) button, which gives you various options. From there, I selected 'Connect with MongoDB Compass' and copied the connection string.
This was detected as expected by Compass and the relevant fields were filled in automatically. I also filled in the password by copy/pasting it into the relevant field; I'm 100% sure it is correct.
I checked that the username used was indeed set up as an admin and it is.
I checked my Authentication database was correct and it is.
I've checked that my public IP was added to the whitelist and it is. The only thing I've noticed is that when I added my public IP address, it added a /32 at the end. Is that the port?
But I'm not quite sure what else to test for to resolve this problem.
Any suggestions?
Thanks.
I eventually found out what the problem was after speaking to someone from the MongoDB support team!
Everything was done correctly except for one thing: I was being impatient after changing my cluster user's password. It can take up to 2 minutes for the system to be updated and therefore for Compass to be allowed access.
Once I waited a couple of minutes, I was able to login as expected in Compass.
I still can't quite believe I wasted so much time on such a simple issue but the main thing is that it is resolved.
I did send them some feedback as a lot of things could have been done a lot better:
Highlight it better in their documentation, e.g. in red?
Make the "warning" message displayed on the webpage after updating the user details more obvious. Apparently it was right in my face, but I never spotted it appear or disappear because, once I'd updated the user details on the website, I'd switch immediately to Compass to try to log in. By the time I was done, well over 2 minutes would have elapsed and the message would be long gone, so it's not very useful the way it is currently done.
Instead of just saying 'Authentication Fail', which is correct, the message could read differently when it knows the user is being updated, e.g. 'Authentication Fail - Please try again in a few minutes as we're updating this user's details'... Something like that, anyway.
So, remember to be patient when changing your user's details in MongoDB and if you are, then yes, you will have a database up and running in the cloud in 5 minutes or less! :)
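One extra tip: if you ever want to rule Compass itself out while you wait, you can test the exact same connection string with a few lines of the official Node.js driver (npm install mongodb). The cluster host, username and password below are placeholders for your own values:

    // Quick credential check against an Atlas cluster using the Node.js driver.
    // The URI below is a placeholder; paste your own Atlas connection string.
    const { MongoClient } = require('mongodb');

    const uri = 'mongodb+srv://myUser:myPassword@cluster0.example.mongodb.net/?retryWrites=true&w=majority';

    async function main() {
      const client = new MongoClient(uri);
      try {
        await client.connect();                          // fails here if the credentials are wrong
        await client.db('admin').command({ ping: 1 });   // round trip to confirm access
        console.log('Authentication OK');
      } finally {
        await client.close();
      }
    }

    main().catch(err => console.error('Connection failed:', err.message));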

Intermittent failure of onFormSubmit trigger

We are a small school district and I have written a simple form for teachers to report disciplinary problems. The destination spreadsheet has a script bound to it and an onFormSubmit trigger is set.
The script reads the form data from the event object e passed to onFormSubmit(), creates a report in Google Docs, sends email notices to the relevant people and does a few maintenance tasks. Link to code.
It had been working fine until recently, when the trigger started occasionally failing to fire. There's no error message, and the form data is submitted. This morning, two teachers in separate incidents submitted the form and the trigger failed; a third submitted the form and it worked.
The form and sheet are 'owned' by the Dean of Students account, and users access the form anonymously on the network (not signed in to a Google account). The form is set with VIEW access for anyone with the link; the sheet has no sharing, owner only.
I am completely stumped as to why this would work sometimes and not others. Clues?
I recently developed a Google Spreadsheet with multiple sheets that takes data from 4 different forms. When the form data comes in, I have scripts that run multiple calculations and formatting on that data, so the onFormSubmit trigger is crucial for me. I've come across the exact same problem you are having right now.
At first I thought there was something wrong with my code and I checked it over and over again. Then, I scoured the web for answers to why my triggers failed about 10 to 20% of the time. A lot of what I found on Google forums related to problems from Google's side. I found that tons of people faced the same problem I had, but none of them had a definitive fix.
My eventual solution was to create yet another trigger that fires every 8 hours. This runs everything my other triggers run yet again just to make sure everything I wanted to do has been done. Of course, that trigger could fail, too. But so far, I've been checking the sheet (new data comes in about every 6 hours or so) and I haven't had to fix any problems whereas before, I had to rerun scripts daily.
Maybe for your case, you can have a function that goes through and sends emails that previous faulty triggers failed to send. Then have this function fire every however many hours you prefer.
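Very roughly, that catch-up function could look something like the sketch below. It assumes you add a 'processed' flag column to the response sheet that your normal handler fills in, and processRow() is just a stand-in for your existing report/email logic; the sheet name and column number are placeholders:

    // Hypothetical catch-up job: re-process any form responses the
    // onFormSubmit trigger missed. Assumes the normal handler writes 'DONE'
    // into the PROCESSED_COL column after a successful run.
    var SHEET_NAME = 'Form Responses 1';   // placeholder: your destination sheet
    var PROCESSED_COL = 10;                // placeholder: the "done" flag column

    function catchUpMissedSubmissions() {
      var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName(SHEET_NAME);
      var rows = sheet.getDataRange().getValues();
      for (var r = 1; r < rows.length; r++) {            // skip the header row
        if (rows[r][PROCESSED_COL - 1] !== 'DONE') {
          processRow(rows[r]);                           // stand-in for your report/email logic
          sheet.getRange(r + 1, PROCESSED_COL).setValue('DONE');
        }
      }
    }

    function processRow(row) {
      // Stand-in: rebuild the report and re-send the notices for this row here.
      Logger.log('Re-processing: ' + row.join(', '));
    }

    // Run once to install the time-driven backup trigger (every 8 hours).
    function installCatchUpTrigger() {
      ScriptApp.newTrigger('catchUpMissedSubmissions')
          .timeBased()
          .everyHours(8)
          .create();
    }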
This is probably not a very satisfying answer but it's the best I can do. Good luck!

Deleting BlogEngine.NET comments from the database

I've searched through all the questions tagged blogengine.net and not found an answer here. I set up a site with BlogEngine.NET. At the time, I never configured the spam settings (figuring that, dontcha know, I was going to write such earth-shatteringly good content that all the comments would just be affirmations of how great my content was, with the occasional pithy insight).
It turns out the blog is more of a notebook for myself than anything people have engaged with (quelle surprise), so now I'm going to turn off comments (at least until I can find a good way to moderate them), but I need to delete the existing spam.
I've tried:
using the admin UI but it times out with "Could not delete comment: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated."
writing my own Q&D aspx page to cycle through all the comments but it suffers a similar fate. "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding"
I then turned to the database itself, and from inspection it seems the be_PostComment table is the place to go, so I issued a blanket delete statement there and it deleted all the rows.
However, they're still shown in the UI - is this down to caching by ASP.NET?
It must have been ASP.NET caching: when I wrote the initial question I had already tried (CTRL+) F5 to no effect. I recycled the site and the app pool in IIS, and I'm now happy to report that all the spam comments are gone.

What time should I build to production?

My users use the site pretty equally 24/7. Is there a meme for build timing?
International audience, single cluster of servers on Eastern time, but it gets hit well into the morning by international clients.
1 db, several web servers, so if a build doesn't touch the db, it's simple: deploy whenever.
But when the site has to come down, when would you, as a programmer, be least mad to see SO be down for, say, 15 minutes?
If there's truly no good time from the users' perspective, then I'd suggest doing it when your team has the most time to recover from any build-related disaster.
Here's what I have done, and it's worked well for me:
Get a site traffic analysis tool which will graph hourly user load
Select the low point in the graph for doing updates
If you're small, then yeah, find when your lowest usage period is, and do it then (for us personally, usually around 1AM-3AM PST is the lowest dip...but it never drops to 0 of course). Once you start growing to having a larger userbase, if you want people to take you seriously you'll need to design your application such that you can upgrade without downtime. This is not simple, and it often involves having multiple servers.
I've spent ages trying to get our application to this point. The best I've come up with so far is to run both the old version and the new version at the same time for a couple of hours: users logged in at the time of the switchover stay on the old version until they log out, and the next time they come in they go to the new version. Any users arriving after the switchover get sent straight to the new version. It's still not foolproof, but it's pretty good.
What kind of an application is it? Most sites that I use tend to update around 2AM or 3AM.
Use a second site, and hotswap as needed.
The issue with hot-swapping is that the database would still be shared, and breaking changes would bring the stand-in down as well.
I guess you have to ask your clients.
In any case, there's the wee hours of the morning. If you're talking about a locally available website, I do not think users will mind if they get an "under maintenance" notice at 2 am in their time zone.
Depends on your location: 4AM East Coast/1AM West Coast is typically the lightest time.
Pick a few times that you'd like to do it and offer them as choices to the decider-types. Whatever you do, put up a "down for routine maintenance" page while you deploy.
Check the time of least usage
Clone/copy/update latest production code to another directory
If there are any database migrations to be done, perform those that are required and that don't conflict with the old code base
At time of least usage, move symlink to point to latest code
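The symlink swap in that last step can be made close to atomic by creating the new link under a temporary name and renaming it over the old one. A small sketch of that (written in Node.js here, equivalent to ln -sfn in a shell script; all paths are placeholders):

    // Minimal sketch of repointing a "current" symlink at a new release
    // directory. Paths are placeholders; run with appropriate permissions.
    const fs = require('fs');

    const newRelease = '/var/www/releases/2024-06-01';  // freshly deployed code
    const currentLink = '/var/www/current';             // what the web server serves
    const tmpLink = currentLink + '.tmp';

    fs.rmSync(tmpLink, { force: true });    // clean up any leftover temp link
    fs.symlinkSync(newRelease, tmpLink);    // build the new link beside the old one
    fs.renameSync(tmpLink, currentLink);    // rename over the old link: the switch is near-atomic
    console.log('Now serving', fs.readlinkSync(currentLink));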
First, use an analysis tool to try to determine your typical "light" traffic times. Depending on the site and where you are in the world relative to most of your users, it could be 4am, it could be 1pm - who knows. Then, once you have a good timeframe nailed down, make your deployment process as automated as possible so that it happens quickly and minimizes the downtime of your site.