This is really weird, since I didn't see any information about this and I still can't find anything. When I try to run more than 5 pipelines at once, the 6th one gets 'REJECTED' and does not start. It says that only 5 pipelines can be provisioned at the same time. Does anyone know anything about this? Maybe it is happening because I am using the Developer version?
I use Azure Data Sync to synchronize 2 Azure SQL Databases for 2 tables. The synchronization always works well for 6-7 days, but I think that every time I make a new deployment in the release pipeline (in Azure DevOps), without changing anything in those tables, the sync group stops working.
The sync group status still shows "Good" even when the synchronization doesn't work. I also tried to start the synchronization manually by clicking the Sync button, but nothing happens.
Now I always have to delete the sync group when it stops working and create a new one for the same tables to make everything work again.
I would like to know why the synchronization stops working.
Has anyone run into the same problem, or does anyone know how to fix it?
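Not a fix, but while debugging this it may help to pull the sync group's logs and trigger the sync from the management API instead of the portal, since the portal status can lag behind reality. Here's a minimal sketch using the Python azure-mgmt-sql package; all resource names are placeholders, and the exact method signatures may differ between package versions:

```python
# Sketch only: assumes the azure-mgmt-sql and azure-identity packages;
# every resource name below is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Pull the sync logs: errors sometimes show up here even while the
# portal status still says "Good".
logs = client.sync_groups.list_logs(
    resource_group_name="my-rg",
    server_name="my-server",
    database_name="my-hub-db",
    sync_group_name="my-sync-group",
    start_time="2021-01-01T00:00:00Z",
    end_time="2021-12-31T00:00:00Z",
    type="Error",
)
for entry in logs:
    print(entry.timestamp, entry.details)

# Kick off a sync outside the portal, to rule out a UI-only issue.
client.sync_groups.trigger_sync(
    "my-rg", "my-server", "my-hub-db", "my-sync-group"
)
```

If the log shows schema errors right after a deployment, that would point at the release pipeline altering something the sync group depends on, even if the tables themselves look unchanged.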
I've got a database on Google Cloud SQL that is used by our application running on Kubernetes in GKE.
The MySQL instance is running 5.6, and I need to upgrade it to 5.7, so I tried using the new migration jobs.
I've set up the connection profile and all the required permissions for the source DB, then followed the instructions to set up a continuous migration.
The job says it's running, migrating the ~450GB database. After about a day, it's still running, the storage used seems to have stopped growing, and the replication delay is at 0. The source database is not currently in use (that's why I'm using it to try this out before doing the same with a more important DB).
According to this, if the dump phase is done, I should be able to promote the instance, but the Promote button remains greyed out, and there's no way to check the running state (it only says "running"; I don't see any way to check whether it's dumping, on CDC, or anything else).
The documentation seems a bit lacking, and I couldn't find anything by googling around. Has anyone used this?
In short, my questions are:
Why can't I promote the instance?
And how can I check which phase the migration is in?
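For the second question, the console doesn't surface the phase, but the underlying Database Migration Service API appears to. Here's a rough sketch with the Google API discovery client; it assumes the datamigration v1 API and Application Default Credentials, and the project/region/job names are placeholders:

```python
# Sketch only: assumes the Database Migration Service REST API
# (datamigration.googleapis.com, v1) and Application Default Credentials.
# The project/region/job names below are placeholders.
from googleapiclient import discovery

dms = discovery.build("datamigration", "v1")
job_name = "projects/my-project/locations/us-central1/migrationJobs/my-job"

job = dms.projects().locations().migrationJobs().get(name=job_name).execute()

# "state" is the coarse status the console shows (e.g. RUNNING);
# "phase" is the finer-grained one (e.g. FULL_DUMP vs CDC) that the
# console doesn't display.
print(job.get("state"), job.get("phase"))
```

If the job reports a CDC phase, promotion should in principle be possible (the same API also exposes a migrationJobs promote method); if it's still in FULL_DUMP, that would explain the greyed-out button.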
Here's a screencap of my job: link (because SO doesn't let me embed images yet).
Thanks.
P.S.: the tag that the documentation says should be used on Stack Overflow is google-cloud-database-migration-service, which is too long and Stack Overflow doesn't allow it, so I used google-cloud-sql instead :/
I am seeing an issue like this, but possibly more frustrating. After a week of migrating a 2TB database, the storage used resets to near zero and the full dump restarts, without any errors or any indication of what happened.
I have been running daily Dataprep jobs, and since the update last week approximately half of my jobs are now hanging and not being published. They appear as jobs in progress, although when I go to the actual job page, the job appears to be complete. There is no publishing action, and the publishing target does not appear to be updated. Some jobs have now been going on for over 72 hours, since Friday.
I've seen traces of other users having the same issue online, but I have not seen any sort of response or acknowledgement from either Google or Trifacta.
I have tried restarting the jobs with no success, and it appears there is no way to cancel the hanging jobs because, from Google's perspective, the jobs themselves were successful, just not published. This problem appears both in my jobs that publish to BigQuery and in jobs that publish to Google Cloud Storage, and for both manual and scheduled jobs.
This may impact only jobs that were pushed during the upgrade and should be rather cosmetic in nature. Please note that you won't get charged.
Did the exact same job work before with no changes? If so, please contact support and provide them, as a reference, the IDs of the previously successful and now-failing jobs so it can be investigated further.
Cheers,
Sebastian
I have come across the same problem! The output of the jobs is placed in a temp folder in Cloud Storage, with the output mostly consisting of multiple files without headers...
It is also creating huge issues here. Instead of the normal output file, it places multiple parts of it in a temp folder, without headers. That makes new scheduled jobs that rely on these outputs useless, because they don't load the new output.
If you manually merge the files in the temp folder, add headers (in the case of CSV), and place them in the correct folder, the output can be recreated manually (for CSV); a sketch of that merge follows below.
Also, no response from Google yet.
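In case it helps, here's a rough sketch of that manual workaround using the google-cloud-storage Python package. The bucket, prefixes, and header are placeholders; adjust them to whatever your job actually writes:

```python
# Sketch only: merges the header-less part files Dataprep left in a
# temp folder back into the single CSV that downstream jobs expect.
# The bucket, prefixes, and header below are placeholders.
from google.cloud import storage

BUCKET = "my-dataprep-bucket"
TEMP_PREFIX = "temp/my-job-output/"      # where the header-less parts land
FINAL_PATH = "output/my-job-output.csv"  # where downstream jobs look
HEADER = "col_a,col_b,col_c\n"

client = storage.Client()
bucket = client.bucket(BUCKET)

# Upload the missing CSV header as its own object.
header_blob = bucket.blob(TEMP_PREFIX + "_header.csv")
header_blob.upload_from_string(HEADER, content_type="text/csv")

# Collect the part files in name order.
parts = sorted(
    (b for b in client.list_blobs(BUCKET, prefix=TEMP_PREFIX)
     if not b.name.endswith("_header.csv")),
    key=lambda b: b.name,
)

# Server-side compose of header + parts into the expected output file.
# Note: GCS compose accepts at most 32 source objects per call, so a
# very large output would need to be composed in stages.
bucket.blob(FINAL_PATH).compose([header_blob] + parts)
```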
We're seeing the exact same thing, down to the destinations and job types... it's almost like Dataprep is losing track of the underlying Dataflow job and not finishing on its completion. (That's why you see the temp files: that's the Dataflow output, and Dataprep normally handles the formatting of the final output file separately.)
Someone was kind enough to already post this on the issue tracker, so please go star it and add any additional details that may be helpful to the Dataprep team:
https://issuetracker.google.com/issues/135865374
I'm currently experiencing something rather weird: while migrating emails from a GoDaddy email server to a new G Suite setup for a number of users, I was able to successfully migrate a couple of the accounts, as confirmed by Google's 'Complete' tick beside them. I was also able to observe the migration as it went on.
However, for one of the accounts, the number of emails read just keeps increasing, and it still hasn't displayed 'Complete'; it remains stuck at '99%'.
See the screenshots I took just now, below. As of the first screenshot, it says 'Successfully migrated 3230 emails' while stuck at 99%:
Then I hit refresh and check the status of that same account, and now it says '...3250 emails', while still stuck at 99%:
This isn't how it's supposed to behave; at least, that isn't the behaviour I experienced with the previous 4 accounts in that list. Ideally, it should say 'Migrating X out of fixed_amount emails'. In this case, that fixed_amount was about 2,000 emails. It has since passed that figure, but instead of showing 'Complete', it shows 'Successfully migrated new_amount', where new_amount keeps increasing.
This has been ongoing for almost 24 hours now. Honestly, I don't know if this is a bug or not. I really just need some helpful info to know whether I should be concerned, or to hear whether someone else has run into this. Anyone?
Stumbled onto Google's documentation: https://support.google.com/a/answer/7032598?hl=en
To quote the 'Why does my migration look like it's stuck at 99%?' section:
You'll see 99% when all email is migrated. After everything is migrated, the data migration service applies any labels to the migrated email, which can take time. When the labels are applied, you should see that the migration is complete (100%).
You might also see this issue if the estimated number of emails to migrate exceeds the actual number of messages. The migration will report 99% until the migration completes. This process might take some time.
You shouldn't be concerned. I was migrating around 29,000 emails from a personal Gmail to Google Workspace Gmail, and the migration took 4 days (migrating only one user), of which the last 1.5 days the migration was "stuck" at 99%. There's no need to restart the migration; eventually it does finish. I also got several error codes (e.g. 17009, 'Generating an access token with the supplied credentials was unsuccessful...'), but none proved relevant; I didn't act on them because, as in your case, I saw the number of migrated emails increasing.
Building from VS to the device works just fine, but building with the Master configuration doesn't. Since Master is required for Store submission, the version downloaded from the Store has the problem.
I have been trying to track down the issue, and basically input is just not working. If I perform a tap, I get:
The thread 0x12a4 has exited with code 0 (0x0).
I think spatial mapping also gets weird treatment, which got me thinking it may be related to multithreading (since the only correlation I can think of is the internal use of multithreading).
I thought at first it was only one of our apps, but then I was told that a second, totally different app is getting the same problem.
Does anyone know what to do?
I posted on the HoloLens forum about 3 weeks ago, but no one has replied yet.
Contacting Microsoft is basically a waste of time, as I will probably never reach anyone with that kind of knowledge.
Any ideas?
The issue seems to come from the multiple .rcs files I was keeping. Keeping only the latest one seems to allow the Master build to work.
I once had this problem a long time ago; tbh, I think it has something to do with the OS.
I started on a Win 10 Home PC and was getting the error. When I switched to another PC with Win 10 Enterprise, it worked fine, so I have been sticking with the Win 10 Enterprise PC ever since.