vncserver has wrong hostname - windows-7-x64

I had to change the name of the Windows 7 system. Unaccountably, vncserver is still using the old computer name. This is the RealVNC free version. I re-installed, but it is still using the old computer name.
I had a Z400 motherboard go bad, and it took the disk drive with it. I replaced the motherboard ($39 - cheap) and cloned the C: drive of one of my other Z400 workstations using Acronis. I booted the replacement motherboard with the cloned copy, changed its name to the old defective machine's name, and activated Windows. When it rebooted, vncserver still had the old computer name. I cannot get rid of it, and it conflicts with the vncserver on the other Z400, since they both use the same name. There is no option in the server to use a different name that I can find anywhere.
The IPs are different and all systems behave fine: I can ping them and even access shares using their names. The problem system clearly shows the correct name, but, unaccountably, vncserver is using the wrong one.
This system will be upgraded to Windows 10 in a few days; maybe the problem will go away when that happens.

Solved! First I had to log in to the RealVNC website and reduce the number of clients to under 5, as that was my limit. Then I had to remove the problem system's leftover name. Once I was under 5 systems, I could add the problem one, and once it connected to RealVNC's cloud it got the correct name. This was an artifact of having more than 5 systems when I was only licensed for 5. The "6th" one worked locally, as its setup was still valid, but it was refused connection to the cloud, so it never got the chance to change its name. It "worked" until I did a flushdns and its old setup was no longer valid.
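(For reference, the flushdns step above is the standard Windows DNS-cache flush, run from a command prompt:

ipconfig /flushdns

It clears the local DNS resolver cache, which is apparently what finally invalidated the stale setup.)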

Related

Docker - under Windows 10 Pro - Need to map volumes and have them work, not quietly fail

I ran various containers on two different Windows 10 Pro machines and thought I had the data drives mapped correctly, but now I'm finding out that the data isn't being written there at all. One example is MongoDB, where I mapped /mongodb/database:/data/db. I upgraded Docker, and when it restarted MongoDB... POOF! No data. I thought that was weird and looked in /mongodb/database - the directory is empty. Thankfully, the app is still in the development phase, so losing the data wasn't critical...
The line from the docker-compose file:
volumes:
  - /mongodb/database:/data/db
On a different machine, I installed the gogs/gogs image, mapping the data:
docker run --name=Gogs-Git -p 10022:22 -p 10080:3000 -v /var/gogs:/Docker/Gogs-GitServer/Data gogs/gogs
It seemed to work perfectly, so I thought everything was fine, and I pushed a repo up to it. Today I looked at \Docker\Gogs-gitserver\data and there are no files... so where did it write the data?
I also installed TeamCity, mapping its data the same way... nope, it has no logs, no data...
This feature seems to just not work at all. I found a reference from 2016 saying I need to look at the 'Shared' tab (below 'General') and check C: to be shared, but no, that isn't a tab, so it isn't that.
There is no way someone would write a system that just quietly wrote the data some other place, or didn't bother actually mapping it without giving an error - that would be nuts.
So, there must be some other explanation... One of the machines has Hyper-V enabled in the BIOS, the other one doesn't even support it as far as I know.
I think some of the images are Linux and some are Windows (TeamCity, I'm pretty sure, is Windows).
OK, this is interesting... If I look at the volumes and inspect one that is in use, the Target looks about like the right path, but I'm not sure about the /backup and the /data on the last two lines. If those are supposed to be directories under that path, they don't exist; but if I click on the Data tab, I can see the data. It is inside Docker, hidden and not shared, in spite of there being a 'Target' that points at the right directory... how do I get it to start writing this data correctly to that folder?
I've not confirmed this yet with the above configuration, but I found that for other containers I needed to specify the host path as 'c:/data/MongoDb/Database'. When I created the container using that as the path, it worked, and I have data there now. I just need to go back and fix all of these so they map their data correctly...
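A minimal sketch of the pattern that ended up working, with placeholder host paths you would adjust to your own layout. Both the compose volumes entry and -v take host-path:container-path, where the container path is fixed by the image (the mongo image stores its data in /data/db; the gogs/gogs image documents /data):

volumes:
  - c:/data/MongoDb/Database:/data/db

docker run --name=Gogs-Git -p 10022:22 -p 10080:3000 -v c:/data/Gogs-GitServer:/data gogs/gogs

The host directory also has to be on a drive Docker Desktop is allowed to share (in current versions that is under Settings > Resources > File Sharing).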

SqlBase Database on OneDrive

I have a database on Microsoft OneDrive, and I have 4 valid licenses for Gupta SqlBase. When I try to run from PC 1 I can access the database, but when I try the same from PC 2 I get this:
Reason: Attempting to open an existing file and a failure has occurred.
Remedy: Determine and correct the cause of the open file failure.
Verify that the specified file exists. Verify the number of
files allowed open for the operating system permits the
additional file, that is, check the FILES= configuration
parameter setting.
I assume this is related to the database's LOG files and some settings in the Sql.Ini, but I'm not able to find where or how.
The intention is to run the database on OneDrive, buy SqlBase licenses, and run a multi-user system; the application has been built for that.
Where is my thinking wrong?
What am I doing wrong?
What settings are missing?
Thanks
That won't work.
SqlBase (like every other RDBMS) is built to manage one database file plus its log files.
When multiple instances work on more-or-less-replicated data files, it ends in a clash.
There are systems that can work as a distributed cluster (e.g. the document database RavenDB), but they are built to work that way (not with OneDrive, of course, but with their own replication mechanism). SqlBase is not.

How to 'sudo' without 'newgrp'

I have two CentOS 6.8 servers running as VirtualBox VMs.
On one, I can login as a regular user then use "sudo" to run administrator commands. I just add "sudo" to the front and all works as expected.
On the other, I need to first run "newgrp wheel", otherwise it nags me that I'm not in the sudoers file. Once that's done, all is well.
As far as I can tell, both systems are otherwise identical. The username in both cases has a primary group of "users" and is also a member of "wheel" and "apache" groups. The "wheel" group, of course, has been given "ALL" access via "visudo".
The only difference, if it's important, is that the first one is a VM on Linux and I access it via PuTTY, while the nagging one is a VM on Windows and I access it via the VirtualBox screen.
It's not a very big issue, but I like not needing the extra step. Does anyone know what is going on here?
Well, it turns out the systems were not as identical as I thought. The sudoers file on the nagging machine had somehow been restored to its original version, which meant that the "%wheel" directive was commented out. I only discovered that while trying to add a 10-minute timeout.
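For anyone hitting the same symptom: run visudo and make sure the stock wheel rule is uncommented. In a default CentOS 6 sudoers it looks roughly like this:

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL

The shipped file has the %wheel line commented out, so a sudoers restored to its defaults silently drops sudo rights for everyone who relied on wheel membership.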

Is WinDbg's vertarget command always accurate?

I wonder because, running it on a client's minidump, it reports a different Windows version than the client repeatedly told me she had - and the version being reported happens to be exactly the version I'm running WinDbg on.
So I wonder: can vertarget always be trusted (and the client not)? Or can the information it relies on be absent with some dump-generation options, so that it falls back to reporting the version WinDbg itself is running on, or maybe some default that happens to coincide with my OS version?
I'm using WinDbg 6.12.
In all my cases so far, vertarget has been correct and the customer/client made a mistake - and vertarget is one of the commands I use for every dump, exactly for the purpose of checking if the dump contains what I need.
But perhaps things can go wrong here as well, so let's evaluate some options:
vertarget also reports debug session time and system uptime. Do those also match your system? Reboot your system in order to get a low system uptime and check again. Is it still your PC's uptime?
vertarget also reports the number of CPUs. Does that number match your number?
Get a virtual machine which does not have your OS, e.g. one from Modern.IE (Microsoft). Copy WinDbg and the dump to the VM and check the output of vertarget again.
WinDbg 6.12 is a bit old. Do newer versions (6.2.9200 / 6.3.9600 or even 10.0) provide the same information or was there a bug fixed already?
Also check some other information:
Is it a dump of the correct application? Use | (pipe)
Is it a dump of the version you are expecting? Use lm vm <exename>
Does it have the flags which can be expected for the method used for taking the dump? Use .dumpdebug.
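Putting those together, a quick sanity-check pass over any incoming dump looks like this at the WinDbg prompt (<exename> is a placeholder for the module you expect; the annotations in parentheses are mine, not part of the commands):

vertarget         (OS version, CPU count, session time, uptime)
|                 (which process the dump was taken from)
lm vm <exename>   (file version and timestamp of the expected module)
.dumpdebug        (header flags showing how the dump was generated)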
Other than that, I observe (anecdotally, not representative) that many client-OS dumps (Windows 7, 8, 8.1) have the latest service packs installed, while administrators seem to follow the "never change a running system" approach on server OSes (Windows Server 2012, R2). So it might just be a coincidence.

Has anyone encountered "Win32 Error : The network path was not found" trying to copy files with FinalBuilder 6?

I have a FinalBuilder job that, as a final step, deploys the compiled app and DLLs to a network share on another server.
About 50% of the time, it just fails with
Win32 Error : The network path was not found
Changing the target from \\myserver\myshare to \\myserver.mydomain.com\myshare will often fix it temporarily - the first 2-3 runs after modifying the build file will work, after which it'll start failing again.
The FinalBuilder task is running with domain credentials granting admin access on the target box; and copying files to/from shares on that server via Windows Explorer works reliably.
I'm completely stumped.
Finally tracked this down. The target server was a virtual machine, and the Hyper-V host network settings were set to "Virtual Network" instead of "Virtual Teamed Network".
I have no idea what that means, but having changed it to Virtual Teamed Network, it works flawlessly. O_o
The network path was not found.
This is related to DNS/WINS not being able to look up the name. When I have seen this, there were problems with our DNS servers.
Adding an entry to the lmhosts file would prevent the system from looking in DNS/WINS.
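For example, a line like the following in %SystemRoot%\System32\drivers\etc\lmhosts (create it from the lmhosts.sam sample if needed; the address and name here are placeholders for your target server):

192.168.1.50    MYSERVER    #PRE

The #PRE tag preloads the entry into the NetBIOS name cache; run nbtstat -R afterwards to reload the cache so lookups for MYSERVER stop depending on DNS/WINS.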
If that does not work, another option to consider is to increase the number of retries on the action. This can be done from the "Runtime" tab of the action by clicking on "Timing Properties".