Docker under Windows 10 Pro - need to map volumes and have them work, not quietly fail (MongoDB)

I ran various containers on two different Windows 10 Pro machines and thought I had the data drives mapped correctly, but now I'm finding out that the data isn't being written there at all. One example was MongoDB, where I mapped /mongodb/database:/data/db. I upgraded Docker, and when it restarted MongoDB... POOF! No data. I thought that was weird and looked in /mongodb/database, and the directory is empty. Thankfully, the app is still in the development phase, and it's not critical that the data was lost...
The relevant lines from the docker-compose file:
volumes:
  - /mongodb/database:/data/db
Different machine:
I installed the gogs/gogs image, mapping the data:
docker run --name=Gogs-Git -p 10022:22 -p 10080:3000 -v /var/gogs:/Docker/Gogs-GitServer/Data gogs/gogs
It seemed to work perfectly, so I thought everything was fine and pushed a repo up to it. Today I looked at \Docker\Gogs-gitserver\data and there are no files... so where did it write the data?
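(For what it's worth, docker run -v takes the host path first and the container path second, and the gogs/gogs image documents /data as its container-side data directory, so the mapping above is most likely reversed. A sketch of the likely intended command, assuming the data should land in C:\Docker\Gogs-GitServer\Data on the host:)
docker run --name=Gogs-Git -p 10022:22 -p 10080:3000 -v c:/Docker/Gogs-GitServer/Data:/data gogs/gogs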
I also installed TeamCity, mapping that data.. nope, it has no logs, no data...
This feature seems to just not work at all. I found a reference from 2016 saying I need to look at the 'Shared' tab (below General) and check C: to be shared, but, well, no, that isn't a tab here, so it isn't that.
There is no way someone would write a system that just quietly wrote the data some other place, or didn't bother actually mapping it without giving an error - that would be nuts.
So, there must be some other explanation... One of the machines has Hyper-V enabled in the BIOS, the other one doesn't even support it as far as I know.
I think some of the images are Linux and some are Windows (I'm pretty sure TeamCity is).
OK, this is interesting... If I look at the volumes, and enter one that is in use, I get this:
The Target looks about like the right path, but I'm not sure about the /backup and the /data on the last two lines; if those are supposed to be directories under that path, they don't exist. If I click on the Data tab, I can see the data: it is inside Docker, hidden and not shared, in spite of there being a 'Target' that points at the right directory. How do I get it to start writing this data correctly to that folder?

I've not confirmed this yet with the above configuration, but I found that for other containers I needed to specify the host path as 'c:/data/MongoDb/Database'. When I created the container using that as the path, it worked and I have data there now. I just need to go back and fix all these VMs so they have their data correctly...
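In docker-compose terms, a sketch of the same fix for the MongoDB service above, using the full Windows host path (assuming the data should live under c:/data/MongoDb/Database):
volumes:
  - c:/data/MongoDb/Database:/data/db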

Related

MySQL Workbench: multiple windows

MySQL Workbench is a fantastic tool. However, I'm having a hard time figuring out how to create multiple windows. For example, in Sequel Pro (or TablePlus, or really any other SQL client) I can have multiple windows.
Yes, I know there are 'tabs', but those aren't quite the same thing. Is there a way to have multiple windows using MySQL Workbench?
From a few other threads, it seems this would need to be done manually via:
$ open -n -a MySQLWorkbench.app
MySQL Workbench was originally designed to be a single-instance app. On Windows this has been extended to allow multiple instances (there's a setting in the preferences), and you found a way to do this on macOS. However, this carries some risk, because all instances share the same config and cache files and can write to them simultaneously, which is prone to file corruption. Also, any changes made to the configuration or connections end up in the same file, so the last change may overwrite changes previously made in another instance.

How to 'sudo' without 'newgrp'

I have two CentOS 6.8 servers running on VirtualBoxes.
On one, I can login as a regular user then use "sudo" to run administrator commands. I just add "sudo" to the front and all works as expected.
On the other, I need to first run "newgrp wheel", otherwise it nags me that I'm not in the sudoers file. Once that's done, all is well.
As far as I can tell, both systems are otherwise identical. The username in both cases has a primary group of "users" and is also a member of "wheel" and "apache" groups. The "wheel" group, of course, has been given "ALL" access via "visudo".
The only difference, if it's important, is that the first one is a VM on Linux, and I access it via PuTTY. The nagging one is a VM on Windows, and I access it via the VirtualBox screen.
It's not a very big issue, but I like not needing the extra step. Does anyone know what is going on here?
Well, it turns out the systems were not as identical as I thought. The sudoers file (edited via "visudo") on the nagging machine had somehow been restored to its original version, which meant the "%wheel" directive was commented out. I only discovered that while trying to add a 10-minute timeout.
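For reference, the fix is to uncomment the stock wheel line in /etc/sudoers, always editing via visudo; a sketch of what it should look like on CentOS 6:
## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL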

Shared folder with Vagrant causes invisible characters to be appended

I have a few invisible characters (�) appearing at the end of a JavaScript file that cause the "illegal character" error in Firefox and Chrome. I've seen various topics about this error, but nothing works for me, and I can't see anything wrong in my document (displaying invisible characters, opening it in a hex editor). This is just driving me crazy.
I use Vagrant with an nginx web server. The document looks clean on the server too (vi + :set list).
Plus, when I restore a clean copy of the document from my Git repository, everything works again. But each time I edit it (like creating a new variable at the top of the document), I get the error again.
If someone can help me, thank you.
If you're using the VirtualBox provider, then VirtualBox shared folders are the default synced folder type. These synced folders use the VirtualBox shared folder system to sync file changes from the guest to the host and vice versa.
There is a VirtualBox bug related to sendfile which can result in corrupted or non-updating files. You should deactivate sendfile in any web servers you may be running.
In Nginx:
sendfile off;
In Apache:
EnableSendfile Off
See vagrant docs: http://docs.vagrantup.com/v2/synced-folders/virtualbox.html
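A minimal sketch of where the directive goes in the nginx config (the file path and root below are illustrative):
# e.g. /etc/nginx/conf.d/app.conf -- exact path depends on your setup
server {
    listen 80;
    root /var/www/app;   # the folder synced by Vagrant and served by nginx
    sendfile off;        # work around the VirtualBox shared-folder bug
}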
It seems that you're using Vagrant... please take a look at your _Vagrantfile and check how files are written to the VM filesystem.
cat ~/.vagrant.d/boxes/[YOUR VM NAME]/include/_Vagrantfile
Maybe you are using config.vm.synced_folder; try using NFS instead (the first argument is the host path, the second must be an absolute path inside the guest):
config.vm.synced_folder "/home/myuser/shared", "/vagrant", :nfs => true
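As a broader sketch, NFS synced folders also require a private network in the Vagrantfile (the host path and IP here are illustrative), and note that NFS synced folders are not supported on Windows hosts:
Vagrant.configure("2") do |config|
  # NFS needs a host-only / private network to reach the guest
  config.vm.network "private_network", ip: "192.168.56.10"
  config.vm.synced_folder "/home/myuser/shared", "/vagrant", :nfs => true
end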

MongoDB replica set in Azure "Waiting for role to start... Calling OnRoleStart()"

I have a problem trying to implement a MongoDB replica set as a worker role instance in Windows Azure. In the Windows Azure portal, one of the instances is shown as busy with the status:
Waiting for role to start... Calling OnRoleStart()
I have checked all the settings and everything seems to be OK. What could the problem be?
Denis Markelov's blog post helped me solve this problem. The solution is mainly his; however, I had to take an extra step to get it to work and thought others might find it useful.
Solution from blog:
Windows Azure reuses virtual machines for roles, so after a fresh deployment you can find files on the hard drive that were created during previous sessions. If MongoDB was terminated improperly, there might be a lock file (a "persisted mutex" analogue) because of which MongoDB refuses to start. It is located on the drive with the label "WindowsAzureDrive" (say it is F:), at the path:
F:\data\mongod.lock
In the case of production use, this situation might require recovery procedures, but if you are just in the process of the initial setup, it is safe to remove this file, letting MongoDB start again.
I was having this problem and did as suggested; however, I was still seeing the same problem. So I took a look at the log file at
C:\Resources\Directory\.MongoDB.WindowsAzure.MongoDBRole.MongodLogDir\mongod.txt
and saw that another file was also giving an error. To fix the problem, you also have to delete the file local.ns in the same directory as mongod.lock.
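A sketch of the cleanup from a command prompt on the instance, assuming the WindowsAzureDrive letter really is F: as in the quoted post (stop mongod first; in production you'd want a proper recovery instead of just deleting files):
del F:\data\mongod.lock
del F:\data\local.ns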

Need an opinion on a method for pulling data from a file with Perl

I am having a conflict of ideas with a script I am working on. I have to read a bunch of lines from VMware files. As of now I just use SSH to probe every file for each virtual machine while the files stay on the server. The reason I now think this is a problem is that I have 10 virtual machines and about 4 files each that I probe for file paths and such, and this opens a new SSH channel every time I refer to the SSH object I created using Net::OpenSSH. When all is said and done, I have probably opened about 16-20 SSH channels. Would it just be easier in a lot of ways if I SCP'd the files over to the machine that needs to process them and then had most of the work done on the local side? The script I am making is a backup script for ESXi, and it will end up storing the files I need to read from anyway.
Any opinion would be most helpful.
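A minimal sketch of the copy-then-parse-locally approach with Net::OpenSSH (the host name, file list, and the settings being grepped are illustrative). Note that Net::OpenSSH multiplexes everything over a single master SSH connection, so reusing one object stays cheap either way:
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;

my $host  = 'esxi-host.example.com';                    # illustrative host
my @files = ('/vmfs/volumes/datastore1/vm1/vm1.vmx');   # illustrative file list

my $ssh = Net::OpenSSH->new($host);
$ssh->error and die "SSH connection failed: " . $ssh->error;

for my $remote (@files) {
    (my $local = $remote) =~ s{.*/}{};                  # keep just the file name
    $ssh->scp_get($remote, $local)
        or die "scp failed: " . $ssh->error;

    # Parse the local copy instead of probing the server repeatedly.
    open my $fh, '<', $local or die "open $local: $!";
    while (my $line = <$fh>) {
        print $line if $line =~ /^(displayName|scsi0:0\.fileName)\s*=/;
    }
    close $fh;
}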
If the VMs do the work locally, it's probably better in the long run.
In the short term, roughly the same amount of resources will be used, but if you were to migrate these instances to other hardware, you would of course see gains from distributing the processing.
Also from a maintenance perspective, it's probably more convenient for each VM to host the local process, since I'd imagine that if you need to tweak it for a specific box, it would make more sense to keep it there.
Aside from the scalability benefits, there aren't really any other pros or cons.