Deleted partitions, TestDisk, MBR and GPT, no idea where to go now

I was doing some late-night work on a hard disk, blowing away partitions on a drive, and I accidentally selected my main drive, which had a Windows 10 install on it. I blew away the partitions with GParted from a live disk.
I managed to recover most of the partitions using TestDisk, but not the MSR (the hidden/reserved one), so the PC doesn't boot...
At this stage I'd like to copy my recovered partition, but nothing I try works; Clonezilla is confused because somehow there is both an MBR and a GPT on the disk.
I don't know which I was actually using (MBR or GPT). I see a whole load of overlapping partitions on it, so it looks like the disk is a mess. However, the main partition is intact and the data is all there, checked with TestDisk.
At this stage I've deleted all the partitions except the one I wish to keep, but Clonezilla/GParted still can't clone my partition.
I have no idea how to move forward from this.
Ideally, what I would have done is clone the remaining partition, do a fresh install of Windows 10 to recreate the correct partition table, partitions etc., and then clone the original partition, the one with the data on it, back on top of the freshly installed Windows 10 one. This process worked for me when I set up my triple boots.
However, at this stage I cannot clone the partition, and I'm too scared to try deleting either the GPT or the MBR in case of data loss.
Advice from anybody would be greatly appreciated. This is only my gaming PC, so we're not talking mountains of valuable data, but there were some old files I forgot to copy to the NAS. Not to mention I realised I never set up backups of this computer to the NAS; I know I stuffed that part up. You'd save me re-downloading the 600 GB of games I had on there, and my setup, which I loved!
Thanks in advance for your time in reading this mess.

Anyway, I figured it out, and this is how I did it.
Basically, Windows 10 had been installed using GPT.
TestDisk recovers any way it can, which in this case was by writing an MBR table and finding the data.
So I launched GParted to get info about my current disk, and the partition was visible under MBR, which meant the GPT was safe to delete.
So I did that, after which I could Clonezilla the partition to a spare disk, saving all the data!
Then a fresh install of Windows 10 to re-create the partitions: Recovery, MSR and data.
Then Clonezilla from the spare disk on top of the new Windows 10 install, and it's all working perfectly, as I'm typing this from that PC.
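For anyone who would rather do that GPT-removal step from the command line instead of GParted, here is a rough sketch using sgdisk, wrapped in a small Java runner; the device path is a placeholder, so double-check it with lsblk or TestDisk before running anything destructive:

```java
import java.io.IOException;

public class ZapGpt {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder device -- verify the target disk first; this is destructive.
        String device = "/dev/sdX";

        // sgdisk --zap destroys only the GPT data structures and leaves the MBR alone,
        // which is what you want when the MBR copy of the partition table is the valid one.
        // (--zap-all would wipe both MBR and GPT, which is NOT what you want here.)
        Process p = new ProcessBuilder("sgdisk", "--zap", device)
                .inheritIO()
                .start();
        System.exit(p.waitFor());
    }
}
```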
Thanks all for reading!

Related

How to Upgrade Synology 2-Bay NAS Storage to Larger Capacity?

First I'd like to apologize for the long read.
I hope you can help me with this one. I have a 2-bay NAS (DS218+). I initially had a single 4TB drive (WD Red NAS drive), set up as SHR without data protection, and multi-volume support was flagged as No. I decided to upgrade my storage capacity, so I bought a new 8TB WD Red NAS drive.
I've been reading the knowledge base but am still reluctant to proceed, as I'm still quite confused about whether I'm doing the correct procedure. What I want to achieve is to upgrade my storage to 8TB without data loss (without data protection in mind yet). Let's just say I want to replace my 4TB with the 8TB like nothing happened.
The current version of my DSM is 6.2.3. In my 1st bay is the 4TB. I attached the 8TB to the 2nd bay and saw the drive recognized and "Not Initialized", as expected. Now, the videos I've seen always mention a Manage button, but in my case I can't find it. And it didn't ask me to repair anything, nor was there a "Degraded" status.
I only see Add Drive, however, when I go to Storage Pool -> Action. I went with it and clicked Next, then got a pop-up warning me that the data on the newly added drive would be erased. I clicked OK and this was displayed:
Now, the "Total Capacity: 3.63TB" is what got me bummed. I'm new to RAID stuff, so I'm still quite confused. I'm hoping to get a new maximum capacity of around 7.2TB from my new 8TB drive. How do I do this without losing data and keep my system as-is?
I hope you can guide me. I'd really appreciate it.
Thank you so much in advance.
To have redundancy, you cannot use more space than can be stored on at least two disks, so the usable capacity is limited by the smaller drive. Right now, with a 4TB and an 8TB drive, you only have ~4TB available across the two disks. If you swap in another 8TB drive, you will be able to use 100% of the space.
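To put rough numbers on that, here is the usual SHR-1 rule of thumb (usable space is approximately the total of all drives minus the largest one); the figures below use marketed TB sizes, so they come out a bit higher than the TiB values DSM displays:

```java
public class ShrCapacity {
    // SHR with 1-disk redundancy: usable space is roughly the sum of all
    // drives minus the largest drive (the "lost" part provides the redundancy).
    static double usableShr1Tb(double... drivesTb) {
        double total = 0, largest = 0;
        for (double d : drivesTb) {
            total += d;
            largest = Math.max(largest, d);
        }
        return total - largest;
    }

    public static void main(String[] args) {
        // 4TB + 8TB: only ~4TB usable -- which DSM reports as roughly the 3.63TB you saw.
        System.out.println(usableShr1Tb(4, 8));
        // 8TB + 8TB: ~8TB usable -- roughly the 7.2TB you were hoping for.
        System.out.println(usableShr1Tb(8, 8));
    }
}
```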
More information
https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/What_is_Synology_Hybrid_RAID_SHR

Application fails to read from intermediate topic after restart

We are using Kafka Streams in our application. The topology uses groupByKey, followed by windowing followed by aggregation.
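Roughly, the shape of such a topology looks like this; the topic name, key/value types, window size and the count-style aggregation are placeholders, not the actual application code:

```java
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class TopologySketch {
    public static void build(StreamsBuilder builder) {
        // Placeholder topic and types.
        KStream<String, Long> events = builder.stream("input-topic");

        KTable<Windowed<String>, Long> aggregated = events
                .groupByKey()                                       // backed by an internal -repartition topic when the key was changed upstream
                .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))  // windowing
                .reduce(Long::sum);                                 // aggregation (a simple reduce as a stand-in)
    }
}
```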
Sometimes, after a restart, the application fails to read from the intermediate -repartition topic, i.e. the lag keeps growing bigger and bigger. Deleting the -repartition topic solves the problem until the next restart, but that is not a good solution. The application runs in Docker with local storage mounted as the state directory.
It seems that without Docker everything is OK. Please advise!
Thanks, Mark.
Someone experiencing a similar issue was able to resolve it by setting metadata.max.age.ms to a lower value than the current default (300000 ms). Try setting it quite low (e.g. a few hundred ms) to see if that helps, then work out a reasonable value to run with.
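In Kafka Streams that setting goes into the streams configuration properties. A minimal sketch (application id and bootstrap servers are placeholders), using the consumer prefix so the override only applies to the internal consumers:

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsProps {
    static Properties buildConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        // Refresh topic metadata far more often than the 300000 ms default.
        // Start very low to confirm it helps, then raise it so the brokers
        // aren't hammered with metadata requests.
        props.put(StreamsConfig.consumerPrefix("metadata.max.age.ms"), "500");

        return props;
    }
}
```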

Possible big mistake. What exactly does "db.repairDatabase()" do? (MongoDB)

I have a MongoDB database with several million users.
I wanted to free up space, so I created a bot to remove users who have been inactive for more than 6 months.
I have been watching the disk usage for several minutes,
and I have seen that it varies, but it does not release any significant space, not even 1 MB. That's weird.
I've read that "remove" does not actually free the space on disk; it simply marks it so it can be deleted or overwritten. Is that true?
That seemed to make a lot of sense to me. So I looked for something that forces the space to really be freed up...
I ran repairDatabase(), and I think I've done something wrong.
Everything is now blocked!
I tried my luck and restarted the server.
There is a MongoDB service there, but its status stays at "Starting" (not "Running").
I'm reading on other sites that repairDatabase() requires twice as much free space as the original size of the database, which it does not have.
I do not know what it is doing, and this could take several hours, or days...
Is the database lost? I think I will stop all the services and delete the database.
repairDatabase is similar to fsck. That is, it attempts to clean the database of any corrupt documents which may be preventing MongoDB from starting up. How it works in detail differs depending on your storage engine, but repairDatabase could potentially remove documents from the database.
The details of what the command does are outlined quite clearly (with all the warnings) in the MongoDB documentation page: https://docs.mongodb.com/manual/reference/command/repairDatabase/
I would suggest that next time it's better to read the official documentation first rather than what people say in forums. Second-hand information like that could be outdated, or just plain wrong.
Having said that, you should leave the process running until completion, and perform any troubleshooting if the database cannot be started. It may require 2x the disk space of your data, but it's also possible that the command just needs time to finish.
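As a side note on the earlier observation that remove() didn't shrink anything on disk: dbStats shows how much space is logically used versus physically allocated. A minimal sketch using the MongoDB Java driver (connection string and database name are placeholders; db.stats() in the mongo shell reports the same figures):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class CheckReclaimableSpace {
    public static void main(String[] args) {
        // Placeholder connection string and database name.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("mydb");

            // dbStats reports dataSize (live documents) versus storageSize (what the
            // files actually occupy on disk). A big gap between them is space that
            // remove() has freed logically but not yet returned to the OS.
            Document stats = db.runCommand(new Document("dbStats", 1));
            System.out.println("dataSize:    " + stats.get("dataSize"));
            System.out.println("storageSize: " + stats.get("storageSize"));
        }
    }
}
```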

Avoiding stale metadata in Perforce Server

My question might be simple, and the solution as well; however, I want to know: supposing a user syncs a branch and later manually deletes the physical files from their local machine, the metadata about these files will still exist on the server...
In the long run, I'm afraid this could slow down the server.
I haven't found much about this issue, which is why I'm asking here: how do companies usually manage their Perforce metadata? A trigger that verifies the existing metadata? A program that, from time to time, runs sync #none for client directories that no longer exist?
As I said, there might be many simple ways to solve this, but I'm looking for the best one.
Any help is appreciated.
In practice I don't think you'll have too much to worry about.
That being said, if you want to keep the workspace metadata size to a minimum, there are two things you'll need to do:
You'll need to write the sync #none script you referenced above, and also make sure to delete any workspaces that are no longer in use (see the sketch after these two points).
Create a checkpoint, and recreate the metadata from that checkpoint. When the metadata is recreated, that should remove any data from deleted clients. My understanding of the Perforce metadata is that it won't shrink unless it's being recreated from a checkpoint.
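For the first point, here is a rough sketch of such a cleanup script, shelling out to the p4 command line; the workspace names are hypothetical, and in practice you would build the list from p4 clients output filtered by last access date:

```java
import java.io.IOException;
import java.util.List;

public class CleanUpStaleClients {
    // Hypothetical stale workspaces -- in a real script you would derive this list
    // from `p4 clients` output filtered by access date.
    private static final List<String> STALE_CLIENTS = List.of("old-build-client", "dev-laptop-2019");

    public static void main(String[] args) throws IOException, InterruptedException {
        for (String client : STALE_CLIENTS) {
            // Tell the server the client has nothing synced (clears its have-list).
            run("p4", "-c", client, "sync", "//...#none");
            // Then delete the workspace spec itself so it no longer carries metadata.
            run("p4", "client", "-d", client);
        }
    }

    private static void run(String... cmd) throws IOException, InterruptedException {
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}
```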

Is running PostgreSQL in memory a good idea?

Recently we have been working on migrating our software from a general PC server to a kind of embedded system which uses a Disk on Module (DOM) instead of a hard disk drive.
My colleague insists that, since the DOM can only support about 1 million write cycles, we should run our database entirely on a RAM disk and back up the database to the DOM.
There would be 3 ways to trigger the backup:
User trigger
Every 30 minutes
Every time there is an add/update/delete operation in the database
As we expect that users will only modify the database when the system is installed, I think PostgreSQL might not write that often.
But I don't know much about PostgreSQL, so I cannot judge whether it is worth all this trouble, or which approach is better.
What do you think about it?
The problem of wearing out SSDs can be alleviated by whatever wear levelling the SSD's firmware does. Sometimes those chipsets don't do it well, or leave the responsibility to someone else. In that case, you can use a filesystem designed to do wear levelling by itself; UBIFS or LogFS are suitable filesystems.
Assuming that the claim about the DOM write cycles is true, which I can't comment on, then this won't work very well. PostgreSQL assumes that it can write whatever it wants whenever it wants (even if no logical updates are happening), and you have no real chance of making it go along with the 3 triggers that you mention.
What you could do is have the entire thing run on a RAM disk and have some operating system process back this up atomically to permanent storage. This needs careful file system and kernel support. This could work if your device is on most of the time, but probably not so well if it's the sort of thing that you switch on and off like a TV, because the recovery times could be annoying.
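One way to approximate that backup step, assuming a logical dump is acceptable instead of a filesystem-level snapshot (paths and database name are placeholders): dump to a temporary file on the DOM and atomically rename it over the previous copy, so a power cut never leaves you without a complete backup.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class BackupToDom {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder paths: the cluster itself lives on the RAM disk, only the dump hits the DOM.
        Path tmp = Paths.get("/mnt/dom/backup/appdb.dump.tmp");
        Path current = Paths.get("/mnt/dom/backup/appdb.dump");

        // Write the dump to a temporary file first (custom-format dump of database "appdb").
        Process dump = new ProcessBuilder("pg_dump", "-Fc", "-f", tmp.toString(), "appdb")
                .inheritIO()
                .start();
        if (dump.waitFor() != 0) {
            throw new IOException("pg_dump failed; keeping the previous backup");
        }

        // Atomic rename: on Linux this maps to rename(2), so the DOM always holds
        // either the old complete dump or the new one, never a half-written file.
        Files.move(tmp, current, StandardCopyOption.ATOMIC_MOVE);
    }
}
```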
Alternatives are using either a more embedded-like RDBMS such as SQLite, or using a storage system that can handle PostgreSQL, like the recent solid state drives, although some SSDs have bogus cache settings that might make them unsuitable for PostgreSQL.