Hello, a few days ago I wanted to rename "ProcessorName" in
HKEY_LOCAL_MACHINE\HARDWARE\DESCRIPTION\System\CentralProcessor\0.
I succeeded, but every time the system rebooted, Windows reverted the changes I made.
With SubACL, I changed the owner of the registry keys to the Administrators group, but it still didn't work.
I also tried putting a .reg file in
C:\Users\%username%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup
but I don't want to rely on that, since I'm creating a program that lets you rename your CPU.
I figure that if I make this key read-only, the system can't change it and will just read it.
How can I make that happen with cmd, powershell, etc. (without opening regedit)?
I don't think you can ever get a change there to survive a reboot; that's by design. As far as I'm aware, HKLM\HARDWARE is volatile data. My understanding is that the entire key is deleted and recreated at each boot by ntdetect.com scanning the system for hardware, so I don't believe it's going to matter if you make it read-only.
Even if my understanding above is incorrect, ntdetect.com is run by the NT boot loader (NTLDR) before execution has been passed off to the NT kernel (ntoskrnl.exe). Hardware detection runs before the NT kernel has loaded because the kernel needs the list of installed hardware before it can load. I would be surprised if it respected a write-deny ACL; I think this happens before the security subsystem is even available.
The only thing I can think of would be to create a script or program that executes at startup or logon to rename the value, but I guess I don't see the purpose in that.
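If you do go that route anyway, a minimal sketch would be a batch file run from a Startup shortcut or a scheduled task with admin rights. I believe the value Windows actually populates is ProcessorNameString; the name and data below are placeholders to adjust:

@echo off
rem Re-apply a custom CPU name after each boot. HKLM\HARDWARE is rebuilt at startup,
rem so this has to run again every time, and writing to HKLM requires elevation.
reg add "HKLM\HARDWARE\DESCRIPTION\System\CentralProcessor\0" ^
    /v ProcessorNameString /t REG_SZ /d "My Custom CPU @ 9.99GHz" /f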
I'm trying to deploy images via MDT that have been upgraded via the MDT "Standard Client Upgrade" task sequence. My images started as Win10 v1607 images and are updated to v1703 and then captured.
When I go to deploy the captured images, I get a popup on first login saying c:\LTIBootstrap.vbs can't be found. Digging in, I discovered that after the OS is installed and the PC restarts, the MDT task sequence continues running as the SYSTEM account. This is bizarre, as it typically runs as the built-in Administrator account.
For some reason, even though the unattend.xml file contains the usual AutoAdminLogon entries, a registry key at
HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\SystemAutoLogon
is being created and set to 1 during the deployment. (I discovered this by comparing the registries at the end of deployment.) This key is not present in the captured image, and it does not get created if I deploy an image that was updated to v1703 manually (via Windows Update instead of MDT).
Any ideas on why the unattend.xml could be ignored or what would cause SystemAutoLogon to get created and set?
I figured out what was going on.
The MDT Upgrade task sequence invokes the upgrade with the command-line /postoobe option pointing to setupcomplete.cmd. This causes the file to be copied to c:\windows\setup\scripts\setupcomplete.cmd. When the Windows install is complete, if a file is present at that location, it is run under the SYSTEM account.
The problem is that this file remains even after the upgrade task sequence is totally complete. So if you then capture the image and deploy it to a real machine, it will see setupcomplete.cmd and run it after the deploy, instead of using the usual default Administrator account.
I imagine the presence of this file at c:\windows... is what causes the registry changes mentioned above. setupcomplete.cmd is only built to bootstrap an upgrade back into the MDT task sequence, and needs to be removed from c:\windows... when the task sequence is done running.
It's important to know that the post-upgrade portion of the upgrade task sequence runs as SYSTEM instead of Administrator, via a very different mechanism than a standard deployment, because that limits what you can do. By default the sequence lets you install applications, but they need to be apps that are OK with being installed by SYSTEM.
For now I've updated my local SetupComplete.cmd in my scripts directory to delete itself when it is done, by changing the last for loop to this (there was also a typo in the original for loop that prevented the exit echo):
for %%d in (c d e f g h i j k l m n o p q r s t u v w x y z) do if exist %%d:\Windows\Setup\Scripts\setupcomplete.cmd (
    rem Remove the bootstrap script so a captured image will not run it again
    del /q /f %%d:\Windows\Setup\Scripts\setupcomplete.cmd
    echo %DATE%-%TIME% Exiting SetupComplete.cmd >> %WINDIR%\Temp\setupcomplete.log)
After thinking about this more and hitting issues caused by running as the SYSTEM account, I started playing with avoiding the SYSTEM account entirely. (One big problem is that if you want to shut down at the end of the task sequence right after a reboot occurs, SYSTEM starts running too quickly, and the call to shutdown in MDT fails.)
The idea is to instead use SetupComplete.cmd running as SYSTEM to simply bootstrap back into running the task sequence as the default Administrator.
There are a few wrinkles to implementing this. Namely, the synchronous commands that run from unattend.xml during a normal install do not run, so things like enabling the Administrator account, disabling UAC for Administrator, disabling the user account page, and disabling asynchronous RunOnce all have to be invoked manually. Beyond that, it is just a matter of setting the right registry entries by calling PopulateAutoAdminLogon and SetStartMDT from a step in the task sequence after the OS upgrade is complete, and then performing a restart. This seems to work pretty well. The ideal way to do this would be to have the same script that calls PopulateAutoAdminLogon/SetStartMDT also parse unattend.xml and run those commands.
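For anyone trying this, the auto-logon half of that boils down to the standard Winlogon values. Here is a rough batch sketch of the equivalent manual registry edits; it is only an illustration, not MDT's own code, and the account name and password are placeholders:

@echo off
rem Rough equivalent of the auto-logon portion only. MDT's PopulateAutoAdminLogon
rem does more than this, and SetStartMDT (re-launching the task sequence at the
rem next logon) is not shown here.
set WINLOGON=HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
reg add "%WINLOGON%" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "%WINLOGON%" /v DefaultUserName /t REG_SZ /d Administrator /f
reg add "%WINLOGON%" /v DefaultPassword /t REG_SZ /d PlaceholderPassword /f
rem Clear the flag mentioned in the question so the next logon is Administrator, not SYSTEM
reg delete "%WINLOGON%" /v SystemAutoLogon /f 2>nul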
For some reason shell hiding does not work even though everything is set for it. My best guess is that the task sequence runner is doing this because IsOSUpgrade is set, but I'm not sure.
With this approach, SetupComplete.cmd is just responsible for a single bootstrap back into the task sequence, and the task sequence can delete it at the same time that it calls a script to do PopulateAutoAdminLogon/SetStartMDT.
There is enough work needed to fully polish this approach that I'll just work around the one auto-logon issue for now, but it really does feel like a better way for MDT to handle upgrades. Hopefully they'll flesh it out in the future.
I have two CentOS 6.8 servers running as VirtualBox VMs.
On one, I can log in as a regular user and then use "sudo" to run administrator commands. I just add "sudo" to the front and everything works as expected.
On the other, I need to first run "newgrp wheel", otherwise it nags me that I'm not in the sudoers file. Once that's done, all is well.
As far as I can tell, both systems are otherwise identical. The username in both cases has a primary group of "users" and is also a member of "wheel" and "apache" groups. The "wheel" group, of course, has been given "ALL" access via "visudo".
The only difference, if it matters, is that the first one is a VM on Linux, and I access it via PuTTY. The nagging one is a VM on Windows, and I access it via the VirtualBox screen.
It's not a very big issue, but I like not needing the extra step. Does anyone know what is going on here?
Well, it turns out the systems were not as identical as I thought. The sudoers file (edited via "visudo") on the nagging system had somehow been restored to its original version, which meant that the "%wheel" directive was commented out. I only discovered that while trying to add a 10-minute timeout.
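For reference, the stock CentOS 6 sudoers file ships with that directive commented out; uncommented (via visudo) it looks roughly like this:

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL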
I'm considering using shims to get around a game demanding Admin privileges (I tried editing the embedded "requestedExecutionLevel" tag with Resource Hacker and using .manifest files, but discovered the launcher software always downloads a new version of itself before running, thereby overwriting "asInvoker" with "requireAdministrator"). If I write-protect the .exe, it exits with an error.
I understand that the shim required to spoof Admin privileges will probably add no appreciable overhead in itself; but Microsoft's Application Compatibility Toolkit (ACT), which you need to install to enable shims, uses a database to keep track of which application requires which shim. I'm sure this could be done with little overhead; but having seen Microsoft's (and other corporations') past bloatware, I'm concerned my entire system will be slowed down if I install it.
Does anyone have DIRECT experience of installing ACT and KNOW whether it slows the system down generally?
I've discovered you can add RUNASINVOKER as the data of a string (REG_SZ) value named after the application's full path here:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers
and it will do the job without you having to install Microsoft's ACT package.
Example: if you had an application called Smeagol.exe in the directory c:\LordOfTheRings, then create a string value named:
c:\LordOfTheRings\Smeagol.exe
in
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers
and give it the value of
^ RUNASINVOKER
and it will run without requesting Admin privileges.
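If you prefer to do that from a command prompt rather than regedit, an elevated reg command along these lines should produce the same entry (using the example path above):

reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" ^
    /v "c:\LordOfTheRings\Smeagol.exe" /t REG_SZ /d "^ RUNASINVOKER" /f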
I am using Solaris 10.
I have another user apart from root, say testuser, whose home directory is mounted on a NAS file system.
I have some scripts which need to be run as testuser, so I added them to testuser's crontab.
As long as the NAS is up, all the cron jobs run properly, but when the NAS goes down, cron itself crashes with this error:
! could not obtain latest contract for PID 15621: No such process
I searched for this issue and came to know that it happens because the user's .profile file is not accessible. So is there any way to check whether a user-specific .profile file exists before running any scheduled job?
Any help on this will be appreciated.
I think a better solution would be to actively monitor the NAS share and report an error (however errors are reported at your location) if it isn't available. You can use tools like nfsstat to get statistics on the NAS share (assuming it is mounted via NFS). That seems better than checking whether it's working before running cron: check to make sure the share is available, because if it isn't, attention is needed.
Cron doesn't depend on anything but time, so it will run regardless of whether or not the user's home directory is available. If the script that the cron job runs is local, you could prepend a check to make sure the home directory is available before running, and otherwise just exit with an error code (see the sketch below).
If the script that cron is attempting to run is in the user's home directory, you're out of luck, because the attempt to run the script that does the checking will itself fail. You would need to check the status of the NAS share before attempting to run the cron job, but the cron job will run regardless. See where I'm going?
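As a rough illustration, the crontab entry could point at a local wrapper like this; the home directory path, log file, and job path are assumptions to adjust for your environment:

#!/bin/sh
# Hypothetical wrapper: only run the real job if the NAS-hosted home directory
# is reachable; otherwise log the problem and exit with an error code.
NAS_HOME=/home/testuser            # assumed NAS-mounted home directory
JOB=/usr/local/scripts/realjob.sh  # assumed local copy of the real job

if [ ! -d "$NAS_HOME" ] || [ ! -r "$NAS_HOME/.profile" ]; then
    echo "`date`: NAS home not available, skipping job" >> /var/tmp/cronwrap.log
    exit 1
fi

exec "$JOB"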
Again, I would suggest monitoring the NAS and reporting when it is failing.
I am having a conflict of ideas with a script I am working on. The conflict is that I have to read a bunch of lines from a VMware file. As of now I just use SSH to probe every file for each virtual machine while the file stays on the server. The reason I now think this is a problem is that I have 10 virtual machines and about 4 files that I probe for file paths and such. A new SSH channel is opened every time I refer to the ssh object I have created using Net::OpenSSH, so when all is said and done I have probably created about 16-20 SSH objects. Would it just be easier in a lot of ways to SCP the files over to the machine that needs to process them and then do most of the work on the local side? The script I am making is a backup script for ESXi, and it will end up storing the files anyway, including the ones I need to read from.
Any opinion would be most helpful.
If the VMs do the work locally, it's probably better in the long run.
In the short term, roughly the same amount of resources will be used, but if you were to migrate these instances to other hardware, then of course you'd see gains from distributing the processing.
Also, from a maintenance perspective, it's probably more convenient for each VM to host the process locally, since I'd imagine that if you need to tweak it for a specific box, it would make more sense to keep it there.
Aside from the scalability benefits, there aren't really any other pros or cons.