How to push an enterprise-wide PowerShell profile customization? - powershell

As I've explained in my other question, I'm busy setting up a PowerShell module repository in my enterprise.
My plan is to have a master repository (r/w access for a limited group of people) and slave repositories (read-only access for everyone). I need multiple repositories because clients are located in different security zones, and I can't have a central location reachable by all clients.
For this reason, I need to configure the PowerShell profile of the clients so that they point to the correct repository to find the modules. I would like to define a $PowerShellRepositoryPath environment variable for this purpose.
The profile also needs to be customized so that it executes a script located in the repository (i.e., where $PowerShellRepositoryPath points) when PowerShell starts; my goal here is to automatically add the latest module versions to the clients' PSModulePath on startup.
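For illustration, the kind of profile logic I have in mind (a minimal sketch; the startup.ps1 name is hypothetical):

# Dot-source the repository's startup script, if the variable and script exist.
if ($env:PowerShellRepositoryPath) {
    $startupScript = Join-Path $env:PowerShellRepositoryPath 'startup.ps1'
    if (Test-Path $startupScript) {
        . $startupScript
    }
}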
We have a mixed environment with domain members and stand-alone servers in different network zones.
How would you proceed? Is it possible to push that variable and the profile via a GPO for domain members? Would customizing the $Profile variable via GPO be an option?
What about the standalone servers?
Edit:
I think that for creating the environment variable, I'll just use a GPO to create it and read it in PowerShell via $env:variableName. For non-domain machines, I'll probably have to use a script, though.
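Something along these lines for that script, presumably (the share path is hypothetical):

# Persist the variable at machine scope so every new session sees it.
[Environment]::SetEnvironmentVariable('PowerShellRepositoryPath', '\\repo01\PowerShellModules', 'Machine')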

I am not sure about pushing $profile via GPO. But I'd simply use a logon script that copies the profile script from a network location based on the user's group/security membership.
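A minimal sketch of such a logon script (the group check is omitted, and the share path is hypothetical):

# Copy the central profile script over the user's own profile at logon.
New-Item -ItemType Directory -Path (Split-Path $PROFILE) -Force | Out-Null
Copy-Item -Path '\\fileserver\PSProfiles\profile.ps1' -Destination $PROFILE -Force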

Well, if you're going to change the path to the modules, I'd keep a file in the repository (say, current.txt) that contains the name of the current module (or the current file path, whichever you are changing). Then have the $profile script read the content of that file and set the variable based on the contents (see the sketch below). This way you don't have to mess around with updating the profile scripts: just update the central repository's current.txt with the path (or the partial path, the part that changes, or the filename, or whatever), and when it replicates to the client repositories, all PowerShell profiles pick up the latest modules when the profile script is executed.
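A rough sketch of that profile logic, assuming the $PowerShellRepositoryPath environment variable from the question:

# Read the current module folder from current.txt and prepend it to PSModulePath.
$currentFile = Join-Path $env:PowerShellRepositoryPath 'current.txt'
$modulePath = Join-Path $env:PowerShellRepositoryPath (Get-Content -Path $currentFile -Raw).Trim()
if (($env:PSModulePath -split ';') -notcontains $modulePath) {
    $env:PSModulePath = "$modulePath;$env:PSModulePath"
}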
Out of curiosity, why not just overwrite the module files in the client repositories with the latest version? If you did it that way, all clients would always have the latest versions, and you wouldn't have to update the $profile scripts.
Alternatively, you could always write another script to replace the $profile script on all machines. I think the first route I suggested would be the cleanest way of doing what you're after.
As far as the GPO goes, I don't believe you can do this. There is no GPO setting that controls what is in the profile script. You could maybe do it with a custom ADM file, but the profile script path is not controlled by the registry, so no go there.

Related

Is there a way to automatically update the Windows 10 OS for computers in a domain via Group Policy?

I want to be able to push a new Group Policy out with a PowerShell script (or scripts, most likely) that will make all computers in our Active Directory domain update to the Windows OS version we want. Currently there are hundreds of users, and we don't have a way to update their computers other than doing it via Remote Desktop for each computer individually. But every computer already has the .exe file required to update; it just hasn't been run yet. Something like
wuauclt.exe /updatenow
I am also open to other suggestions on how to do this. I was thinking of sending all the users a batch file and having them run it themselves. Any help would be appreciated, and if this post isn't specific enough I can answer questions or take it down. Thanks!
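A rough sketch of what triggering that remotely could look like (the computer names are hypothetical, and WinRM would have to be enabled on the targets):

# Run the legacy Windows Update client on a set of domain machines.
Invoke-Command -ComputerName 'PC001', 'PC002' -ScriptBlock {
    wuauclt.exe /detectnow
    wuauclt.exe /updatenow
}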
Never, and I mean NEVER, let users deploy updates on their own computers by clicking on some batch or .exe file. Two reasons:
It simply won't work, and a large share of the machines will not be updated.
You are teaching users that they can run arbitrary, unknown batch files / PowerShell scripts / .exe files, because it's "safe".
Since you said "hundreds of users", I believe you have a domain there.
What you might be looking for are the Group Policies (https://learn.microsoft.com/en-us/windows/deployment/update/waas-wufb-group-policy) or WSUS (https://learn.microsoft.com/en-us/windows/deployment/update/waas-manage-updates-wsus).

Setting up a VM in Azure from scratch. How to copy files to the VM drive, preferably via DSC?

I have a bunch of tools that I copy to the destination machine in Azure every time I create a new one. How I do it now (sketched after this list):
zip the folder with the tools
open a PowerShell session
use Copy-Item -ToSession
unzip on the VM
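A minimal sketch of that manual flow (the machine name, paths, and credentials are hypothetical):

# Zip the tools, open a session, copy the archive across, unzip on the VM.
Compress-Archive -Path 'C:\Tools' -DestinationPath 'C:\Temp\tools.zip' -Force
$session = New-PSSession -ComputerName 'my-azure-vm' -Credential (Get-Credential)
Copy-Item -Path 'C:\Temp\tools.zip' -Destination 'C:\Temp\tools.zip' -ToSession $session
Invoke-Command -Session $session -ScriptBlock {
    Expand-Archive -Path 'C:\Temp\tools.zip' -DestinationPath 'C:\Tools' -Force
}
Remove-PSSession $session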
This somewhat works. However, it's not ideal; e.g., updating a single tool is not as easy as it should be.
I would like to move this into a PowerShell DSC configuration. I tried to find something like that, but every File resource example I've found so far uses network shares.
Q: Is there any official way to achieve the same result?
Q: If not, is there any sensible way to achieve this? DSC was my first choice, but it is not mandatory.
I consider this a basic requirement and would expect it to be one of the scenarios people commonly try to solve.
Note1: I use DSC in push mode.
Note2: We tried Ansible to cover the whole process (VM creation, LB, NSG, VPN, ..., VM setup - registry, FW, ..), but found out that not everything in Azure is possible with Ansible (IIRC gateways, VPNs, ..)

How to use PowerShell to script a domain user's temporary file location

I am writing deployment scripts using PowerShell to install Scheduled Tasks, Windows Services, and IIS app pools.
Each of these items will run under the identity of an Active Directory domain user. My issue is that the business rules enforced on the servers state that no process or user can write to the C: drive.
Therefore I need to direct each installed object to use the E: drive for temporary storage of any kind.
How can I assign the temp directory environment variable of a domain user using PowerShell, on a server that has no 'knowledge' of that domain user until I instantiate the installed objects?
When it comes to the IIS app pools, I have found a (hacky) solution that could potentially work here:
https://serverfault.com/questions/711470/applicationpoolidentity-environment-variables-iis
It requires me to set the app pool to load the user profile, fire up the pool, snoop the registry to obtain the account's SID, and then modify registry keys under that SID to set the environment variable for the temp drive.
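A hedged sketch of the registry step, once the SID has been obtained (the SID and target path below are hypothetical, and the user's hive must be loaded under HKEY_USERS):

# Point the account's TEMP/TMP at the E: drive via its HKEY_USERS hive.
$sid = 'S-1-5-21-0000000000-0000000000-0000000000-1001'  # hypothetical SID
New-PSDrive -Name HKU -PSProvider Registry -Root HKEY_USERS | Out-Null
Set-ItemProperty -Path "HKU:\$sid\Environment" -Name 'TEMP' -Value 'E:\Temp'
Set-ItemProperty -Path "HKU:\$sid\Environment" -Name 'TMP' -Value 'E:\Temp'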
Is there an easier way? And how could I do this for services and scheduled tasks?
Pie in the sky: I write one PowerShell script to modify the temp environment variable for this one domain account before installing any of these objects, and then when they are installed it "just works".
Any suggestions?

Managing multiple servers in an environment with PowerShell DSC

I want to manage the servers in our staging pipeline with PowerShell DSC (push model). The servers map to the environments as follows:
Development: 1 server
Test: 2 servers
UAT: 2 servers
Production: 2 servers
The servers within one environment have the same configuration, but the configuration differs between environments. I went with the push model because I do not have to set up a pull server.
PowerShell DSC offers the option to manage the configuration via configuration data in a separate file. But this comes with the caveat that you need to specify a node name that matches the respective server name. That means I need to copy the configuration data for each server in one environment (see the sketch below), and when changing the configuration I need to remember that there is a second place where I need to update the value.
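For illustration, a hypothetical configuration data file showing that duplication:

# Contents of a hypothetical .psd1 configuration data file: both Test
# servers repeat exactly the same settings under their own node names.
@{
    AllNodes = @(
        @{ NodeName = 'TEST01'; Environment = 'Test'; SiteRoot = 'E:\Sites' }
        @{ NodeName = 'TEST02'; Environment = 'Test'; SiteRoot = 'E:\Sites' }
    )
}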
Additionally, I do not really care about the server names. If the servers are exchanged for new ones tomorrow, the configuration relevant to the environment should just be applied.
What is the best practice approach to manage multiple servers within one environment with the same configuration?
Check these links; I think they cover the scenario:
Using A Single DSC Configuration for Multiple Servers
DSC ConfigurationNames with multiple nodes
The .mof file that gets produced does not contain the node name inside it. So as long as you build a generic configuration, you can rename it after the fact at deploy time.
You can create one config for each environment with some generic name, then enumerate the list of servers and make a copy of the config for each one with that server's name.
You can take it a step further: have a share where you create a folder for each server, named after the server, then copy the MOF for that server into its folder as localhost.mof. You can then run Start-DscConfiguration -Path \\server\share\$env:computername from that machine as part of your deployment script.
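A hedged sketch of that layout (the server list and share path are hypothetical):

# Stage one generic MOF per server, then apply it from each target machine.
$servers = 'TEST01', 'TEST02'
foreach ($server in $servers) {
    New-Item -ItemType Directory -Path "\\deploy\dsc\$server" -Force | Out-Null
    Copy-Item -Path '.\Output\localhost.mof' -Destination "\\deploy\dsc\$server\localhost.mof"
}
# Then, on each target machine, as part of the deployment script:
Start-DscConfiguration -Path "\\deploy\dsc\$env:COMPUTERNAME" -Wait -Verbose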

Does Chef powershell_script have limited privileges?

I am encountering several situations where, in a Chef recipe with powershell_script, a command appears to fail, whereas if I run the same command in PowerShell outside of Chef, it works.
The two in particular are "regedit", which I am trying to use to set a key for app compatibility, and "net use z:...." to create a mapped drive. Both of these seem to work fine if I run them in PowerShell, but if I use them inside a powershell_script block in a recipe, they don't appear to do anything.
So I'm wondering: is this because Chef runs commands inside powershell_script at some lower privilege level?
And if so, how do I change it so that regedit and net use work?
Thanks,
Jim
EDIT 1: This seems to work for adding the registry entry I needed:
registry_key "HKEY_CURRENT_USER\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags" do
  values [{
    :name => "{2b9034f3-b661-4d36-a5ef-60ab5a711ace}",
    :type => :dword,
    :data => 4
  }]
  action :create
end
That prevents the compatibility popup that I get when we run the SharePoint installer.
EDIT 2: For the record, for more visibility, and in the hope that I remember this, I found the following regarding mapping drives in Windows and Chef:
Mount Windows shares on a Windows node with Chef
and:
https://tickets.opscode.com/browse/CHEF-1267
I haven't tried that yet, but it seems like the answer to my drive-mapping need... hopefully.
The Chef client service runs as Local System (SYSTEM) by default.
In Windows, that user has full privileges on the local system, like root basically, but on the network it authenticates as the computer object.
So if you are trying to use regedit to change something in, for example, HKEY_CURRENT_USER, then you need to remember that the code will not see the same "current user" as you do when you run it interactively. Also, regedit is an .exe; you should really do what you need through the PowerShell providers or .NET objects.
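For example, the AppCompatFlags value from EDIT 1 could be written with the registry provider instead of regedit.exe (a sketch, reusing the value from above; remember that under the Chef service, HKCU: resolves to SYSTEM's hive):

# Create or update the value via the PowerShell registry provider.
New-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags' `
    -Name '{2b9034f3-b661-4d36-a5ef-60ab5a711ace}' -PropertyType DWord -Value 4 -Force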
For net use, you are trying to map a drive. It's likely that the computer account doesn't have the rights to the share that your user has. Again, net.exe is a separate executable. net use maps a share to a drive letter (usually), and in my opinion you shouldn't be doing that in a configuration script; you should access the UNC path directly. Either way, I still think you're probably running into a permissions issue here.
You could change the credentials of the service to use a user account that has all the rights you want, but before doing something like that you should consider changing your workflow to not need that.