I've run into some trouble with GCS buckets that were deleted and then re-created (with the same names). If it's relevant, they are 'domain buckets' used for hosting static content. After creating the buckets anew, I can no longer change any permission on those buckets: I get an unhelpful error message and no chance to change the permissions. Is there some kind of waiting period that has to elapse before this problem goes away? In the case of one bucket it's been well over 24 hours and still no change.
It seems I posted this right at the cusp of the time limit. It took more than 24 hours for one bucket, less than 12 for the others. The problem solved itself.
I am working on a project for a client and a couple of weeks ago most of the content "disappeared".
Images and videos are routed through FileStack (a file processing service) but actually stored on Google Cloud Storage in one bucket.
On the day in question everything was working, and then everything stopped. When we investigated, it turned out that the bucket FileStack was pointing to no longer existed, so we created a new bucket with the same name and everything magically worked itself out.
Now my question is, where did all the files from the disappeared bucket go? Is it possible to get them back? Is it possible to figure out what happened?
I have extensively reviewed the audit log in the Activity tab and it shows zero activity for the bucket in question. Is there anywhere else we can investigate?
Can you please send an email to gs-team@google.com, noting the bucket name and an example object name from that bucket, along with the last time you were successfully able to access that bucket/object? Doing it that way will avoid exposing these names on a public forum. Please mention my name in the message so I will get it and can investigate.
Thanks,
Mike Schwartz
GCS Team
When an object is deleted, it is permanently removed from the system and there is no option to recover it [1]. You can protect against this by enabling Object Versioning [2], and to get a better overview of activity in Cloud Storage you can enable Data Access audit logs [3]; a sketch of turning on versioning follows the references below.
As for why the objects disappeared, the first thing to check is whether an Object Lifecycle rule is enabled on the bucket [4].
[1] https://cloud.google.com/storage/docs/deleting-objects
[2] https://cloud.google.com/storage/docs/object-versioning
[3] https://cloud.google.com/storage/docs/audit-logs
[4] https://cloud.google.com/storage/docs/lifecycle
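For reference, a minimal sketch of enabling Object Versioning with the google-cloud-storage Python client; the bucket name is a placeholder and credentials are assumed to be configured:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("my-bucket")  # placeholder name

    # Keep overwritten/deleted objects around as noncurrent
    # versions instead of losing them permanently.
    bucket.versioning_enabled = True
    bucket.patch()

With versioning on, a deleted object becomes a noncurrent version that can still be listed and restored, at the cost of extra storage.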
I'm currently experiencing something rather weird: while migrating emails from a GoDaddy email server to a new G Suite setup for a number of users, I was able to successfully migrate a couple of the accounts, as confirmed by Google's 'Complete' tick beside them. I was able to watch the migration progress as it ran, too.
However, for one of the accounts, the number of migrated emails just keeps increasing, and it still hasn't displayed 'Complete'; it remains stuck at '99%'.
See the screenshots I literally took just now: in the first one, it says 'Successfully migrated 3230 emails' while stuck at 99%. Then I hit refresh, check the status of that same account, and now it says '...3250 emails', while still stuck at 99%.
This isn't how it's supposed to behave; at least it isn't the behaviour I saw with the previous 4 accounts in that list. Normally it says 'Migrating X out of fixed_amount emails', and in this case that fixed amount was about 2,000 emails. It has long since passed that figure, but instead of showing 'Complete' it shows 'Successfully migrated new_amount', where new_amount keeps increasing.
This has been going on for almost 24 hours now. Honestly, I don't know if this is a bug or not. I just need some helpful info on whether I should be concerned, and whether someone else has run into this. Anyone?
I stumbled onto Google's documentation: https://support.google.com/a/answer/7032598?hl=en
To quote the 'Why does my migration look like it's stuck at 99%?' section:
You'll see 99% when all email is migrated. After everything is migrated, the data migration service applies any labels to the migrated email, which can take time. When the labels are applied, you should see that the migration is complete (100%).
You might also see this issue if the estimated number of emails to migrate exceeds the actual number of messages. The migration will report 99% until the migration completes. This process might take some time.
You shouldn't be concerned. I was migrating around 29,000 emails from a personal Gmail account to Google Workspace Gmail, and the migration took 4 days (migrating only one user), of which the last 1.5 days it was "stuck" at 99%. There's no need to restart the migration; eventually it does finish. I also got several error codes (e.g. 17009, 'Generating an access token with the supplied credentials was unsuccessful...'), but none of them turned out to matter; I didn't act on them because, as in your case, I could see the number of migrated emails still increasing.
I have followed the directions provided by Google to delete some old buckets in my account. It is a very straightforward process as listed, but after confirming the deletion, "Preparing to Delete" pops up at the bottom left and the system never actually deletes the files or the bucket.
I have posted this several times, but no one has suggested a solution or a reason why the process does not work.
If you have a lot of files in your bucket, it might simply take a long time to perform the operation.
As a workaround to the UI being unclear, you can use gsutil to remove all files in the bucket, followed by the bucket itself: gsutil rm -r gs://bucket.
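If you'd rather do it programmatically, roughly the same thing looks like this with the google-cloud-storage Python client; a sketch, with a placeholder bucket name and credentials assumed to be configured:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("my-bucket")  # placeholder name

    # A bucket must be empty before it can be deleted,
    # so remove every object first.
    for blob in bucket.list_blobs():
        blob.delete()

    bucket.delete()

For buckets with many objects, gsutil's top-level -m flag (gsutil -m rm -r gs://bucket) performs the removals in parallel and is usually much faster than deleting one object at a time.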
In my experience, when the bucket has lots of objects, using the web interface or gsutil alone is not the best way.
What I did was add a lifecycle rule to have Google delete all the objects in the bucket.
Then, coming back a day later, the bucket could easily be deleted.
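A rough sketch of setting such a rule with the Python client (age=0 tells GCS to delete every object; the bucket name is a placeholder):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("my-bucket")  # placeholder name

    # Delete objects once they are 0 days old, i.e. all of them.
    # GCS applies lifecycle rules asynchronously, typically
    # within about a day.
    bucket.add_lifecycle_delete_rule(age=0)
    bucket.patch()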
I have a mongodb database with several million users.
I wanted to free up space, so I created a bot to remove users inactive for more than 6 months.
I watched the disk usage for several minutes and saw that it fluctuated, but it never released any significant space, not even 1 MB. That's weird.
I've read that remove does not actually free disk space; it simply marks the space as reusable so it can be overwritten. Is that true?
That made a lot of sense to me, so I looked for something that forces the space to really be freed...
I applied repairDatabase(), and I think that was a mistake.
Everything locked up!
I took a chance and restarted the server.
There is a MongoDB service running, but its status stays at "Starting" (not "Running").
I'm reading on other sites that repairDatabase() requires twice as much free space as the original size of the database, which my server does not have.
I don't know what it's doing, or whether this could take hours, or days...
Is the database lost? I think I will stop all the services and delete the database.
repairDatabase is similar to fsck. That is, it attempts to clean the database of any corrupt documents which may be preventing MongoDB from starting up. How it works in detail differs depending on your storage engine, but repairDatabase could potentially remove documents from the database.
The details of what the command does are outlined quite clearly (with all the warnings) in the MongoDB documentation: https://docs.mongodb.com/manual/reference/command/repairDatabase/
I would suggest that next time it's better to read the official documentation first rather than what people say in forums. Second-hand information like this can be outdated, or just plain wrong.
Having said that, you should let the process run to completion, and troubleshoot only if the database cannot be started afterwards. It may require 2x the disk space of your data, but it's also possible that the command just needs time to finish.
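For what it's worth, on the WiredTiger storage engine the usual way to reclaim space after a large delete is the per-collection compact command rather than repairDatabase. A rough sketch with pymongo; the database, collection, and field names here are made up for illustration:

    from datetime import datetime, timedelta, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["mydb"]  # placeholder database name

    # Delete users whose (hypothetical) last_active timestamp
    # is older than six months.
    cutoff = datetime.now(timezone.utc) - timedelta(days=180)
    result = db.users.delete_many({"last_active": {"$lt": cutoff}})
    print("Deleted", result.deleted_count, "users")

    # Ask WiredTiger to return unused space to the OS.
    # Unlike repairDatabase, compact works on one collection
    # at a time and does not need 2x the data size in free disk.
    db.command("compact", "users")

Depending on your MongoDB version, compact may block operations on the collection while it runs, so it's safest in a maintenance window.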
My users use the site pretty equally 24/7. Is there a rule of thumb for build timing?
International audience; a single cluster of servers on Eastern Time, but it gets hit well into the morning by international clients.
One DB and several web servers, so if the DB isn't involved it's simple: whenever.
But when the site has to come down, when would you, as a programmer, be least mad to see SO be down for, say, 15 minutes?
If there's truly no good time from the users' perspective, then I'd suggest doing it when your team has the most time to recover from any build-related disaster.
Here's what I have done and it's worked well for me:
Get a site traffic analysis tool which will graph hourly user load
Select the low point in the graph for doing updates
If you're small, then yeah, find when your lowest usage period is, and do it then (for us personally, usually around 1AM-3AM PST is the lowest dip...but it never drops to 0 of course). Once you start growing to having a larger userbase, if you want people to take you seriously you'll need to design your application such that you can upgrade without downtime. This is not simple, and it often involves having multiple servers.
I've spent ages trying to get our application to this point. The best I've come up with so far is to run both the old and new versions at the same time for a couple of hours. Users logged in at the time of the switchover stay on the old version until they log out; the next time they come in, they go to the new version. Any users arriving after the switchover are sent straight to the new version. It's still not foolproof, but it's pretty good.
What kind of an application is it? Most sites that I use tend to update around 2AM or 3AM.
Use a second site, and hotswap as needed.
The issue with hot-swapping is that the database would still be shared, and breaking changes would bring the stand-in down as well.
I guess you have to ask your clients.
In any case, there are the wee hours of the morning. If you're talking about a locally available website, I do not think users will mind seeing an "under maintenance" notice at 2 am in their time zone.
Depends on your location: 4AM East Coast/1AM West Coast is typically the lightest time.
Pick a few times that you'd like to do it and offer them as choices to the decider-types. Whatever you do, put up a "down for routine maintenance" page while you deploy.
Check the time of least usage
Clone/copy/update latest production code to another directory
If there are any database migrations to be done, perform those that are required and that don't conflict with the old code base
At the time of least usage, move the symlink to point to the latest code (see the sketch below)
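As a sketch of that last symlink step, with made-up paths; on POSIX systems os.replace renames atomically, so there is no instant where the link is missing:

    import os

    NEW_RELEASE = "/srv/app/releases/2024-01-15"  # hypothetical release dir
    CURRENT = "/srv/app/current"  # the path the web server serves from

    # Create the new symlink under a temporary name, then
    # atomically rename it over the old one.
    tmp_link = CURRENT + ".tmp"
    os.symlink(NEW_RELEASE, tmp_link)
    os.replace(tmp_link, CURRENT)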
First, use an analysis tool to try to determine your typical "light" traffic times. Depending on the site and your location in the world compared to most of your users, it could be 4am, it could be 1pm, who knows. Then, once you have a good timeframe nailed down, make sure your deployment process is as automated as possible, so that it happens quickly and minimizes the downtime of your site.