Deploy resources using a PowerShell DSC pull server

I am trying to deploy PowerShell modules from my HTTPS pull server, but it isn't working and I don't know what I'm missing. Here is what I have already done or tried:
Set up an HTTPS-based pull server using the instructions at https://msdn.microsoft.com/en-us/powershell/dsc/pullserver
Registered a pull client using the instructions at https://msdn.microsoft.com/en-us/powershell/dsc/pullclientconfignames
On my pull server, placed the module under C:\Program Files\WindowsPowerShell\DscService\Modules as xWebAdministration_1.12.0.0.zip and xWebAdministration_1.12.0.0.zip.checksum
If I open xWebAdministration_1.12.0.0.zip, it contains DSCResources, Examples, Tests, HighQualityResourceKitPlan.md, README.md, and xWebAdministration.psd1 at the root level; under DSCResources are all the MSFT_* folders and related files.
When I run a custom configuration on my client node that requires the xWebAdministration module, I get a "module not found" exception.
I looked at the client's Event Viewer for errors but don't see anything related.
Any help is appreciated.
Thanks!

Have you tried Publish-DSCModuleAndMof from xPSDesiredStateConfiguration?
You have to run Install-Module xPSDesiredStateConfiguration first.
You can find an example of using this here.
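For reference, a minimal sketch of that approach, run on the pull server. The depot path and the location of the helper script are assumptions that depend on the installed module version, so verify with Get-Help Publish-DSCModuleAndMof:

# Install the module that ships the pull-server publishing helper.
Install-Module xPSDesiredStateConfiguration -Force

# Depending on the module version, the helper may need to be imported explicitly;
# it lives in the module's DSCPullServerSetup folder.
$helper = Get-ChildItem "$env:ProgramFiles\WindowsPowerShell\Modules\xPSDesiredStateConfiguration" `
    -Recurse -Filter 'PublishModulesAndMofsToPullServer.psm1' | Select-Object -First 1
Import-Module $helper.FullName

# Zips the named module(s) from the local Modules folder, generates the matching
# .checksum files, and copies both to the pull server's DscService\Modules folder.
# -Source points at a folder with your compiled .mof files, which it also publishes.
Publish-DSCModuleAndMof -Source 'C:\LocalDepot' -ModuleNameList @('xWebAdministration')

This also takes care of the checksum generation that New-DscChecksum would otherwise have to do by hand.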

Related

Unable to configure a self-hosted agent for Azure DevOps on macOS

I am trying to install the Azure DevOps self-hosted agent on my MacBook but am having some trouble doing this. I followed the URL below for configuration:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-osx?view=azure-devops
After downloading the vsts-agent tar file, when I run the command it throws an error. Can someone help me?
When you receive error messages, please carefully read them; they often contain all of the relevant information you need to solve the problem on your own.
In this case, careful reading shows that you have a file named xxx.tar. You are running the tar command and pointing it to xxx.tar.gz. Thus, it's giving you a file not found error, which is absolutely correct, because there is no file with the .gz extension.
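For example (the file name below is a placeholder; use whatever name actually appears in your download directory):

ls                                          # confirm the actual file name and extension
tar -xvf vsts-agent-osx-x64-<version>.tar   # pass tar the name exactly as it appears, without adding .gz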

How to host a single-spa root-config and modules on a central server during development

I've been experimenting with single-spa for a while and understand the basics of the developer experience: create a parcel, yarn start on a unique port, add the reference to the import map declaration, and so on. The challenge is that as my root-config accrues more and more modules, managing ports and import maps starts to get tedious. What I want is to publish these modules to a central repository and load them from there (e.g., http://someserver.com/repository/moduleA/myorg-modulea.js, etc.).
I was recently introduced to localstack and started thinking maybe a local S3 bucket would serve for this. I have a configuration where builds (yarn build) are automatically published to an S3 bucket running on localstack. But when I try to load the root-config's index.html from the bucket, I get the following JS error:
Unable to resolve bare specifier '#myorg/root-config'
I can access the JS files for each parcel and the root-config just fine via curl, so I suppose this would be a problem with any http server used in the same way. I can flip the root config to use the standard webpack-dev-server instead (on port 9000) and it works okay. So I'm guessing there's a difference between how a production build resolves these modules vs. the local build.
Has anyone tried something like this and got it working?
I had a similar issue that I got to work with http-server by adding each child .js file to a sub-folder in the root-config directory and launching the web server at the root-config directory level. The resulting import map looked like this:
"imports": {
"#myorg/root-config": "http://someserver.com/root-config.js",
"#myorg/moduleA": "http://someserver.com/modules/moduleA/myorg-modulea.js",
"#myorg/moduleB": "http://someserver.com/modules/moduleB/myorg-moduleb.js",
"#myorg/moduleC": "http://someserver.com/modules/moduleC/myorg-modulec.js",
}
Note: By default, Single-SPA has an "isLocal" check before the current import mappings. You'll need to remove this if using a production build or it won't load the correct mappings.
<% if (isLocal) { %>
In my case, I was using localhost instead of someserver so I could navigate into the 'repository' folder and run npx http-server to get everything to run correctly.
I was hung up on this for a little while so hopefully this leads you in the right direction.
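A minimal sketch of the serving step described above (the directory name and port are assumptions; the flags are standard http-server options):

cd root-config                        # the directory that holds root-config.js and the module sub-folders
npx http-server -p 8080 --cors -c-1   # -p picks the port, --cors allows cross-origin requests, -c-1 disables caching during development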
For the record, after trying several approaches to hosting a repository as described above, I found that the unresolved bare-specifier problem went away. So I have to chalk that up to just not having the right URL in the import map. Measure twice, cut once.

Has anyone tried the HLF 2.0 feature "External Builders and Launchers" and wants to get in touch?

I'm working my way through the HLF 2.0 docs and would love to discuss and try out the new features "External Builders and Launchers" and "Chaincode as an external service".
My goal is to run HLF 2.0 on a K8s cluster (OpenShift). Does anyone want to get in touch, or has anyone already figured their way through?
Cheers from Germany
I'm also trying to use the ExternalBuilder. I set up core.yaml and rebuilt the containers to use it. On "peer lifecycle chaincode install .tgz..." I get an error that the path to the scripts configured in core.yaml cannot be found.
I've added volume bind commands in peer-base.yaml and in docker-compose-cli.yaml, and am using the first-network setup. I dropped out the part of byfn.sh that would connect to the cli container so that I can do that part manually; I do the create, join, and update-anchors steps successfully, and then try to do the install and fail. On the install I'm failing on /bin/detect, because the peer can't find that file to fork/exec it. To get that far, the peer was able to read my external builder configuration and the core.yaml file. At the moment I'm trying "mode: dev" in core.yaml, which seems to indicate that the scripts and the chaincode will be run "locally", which I think means in the cli container. Otherwise, I've tried to walk the code to see how the Docker containers are created dynamically, and from what image, but haven't been able to nail that down yet.

PowerShell DSC: Run regular code in DSC

I am creating a DSC configuration for web server setup. My website will use HTTPS, which means I need a certificate in a certificate store. I don't see any modules designed to do this, so I was wondering how I could run regular PowerShell functions in a DSC configuration while keeping the good parts of DSC.
My workflow is as follows:
1. Check whether the certificate exists in the store.
2. If the cert doesn't exist, add it.
3. If the cert does exist, grab the Thumbprint to use in the xWebsite resource's BindingInfo.MSFT_xWebBindingInformation.CertificateThumbprint property.
As of now, I've got the code written to perform these actions, but I would still like to make use of the DependsOn functionality in DSC so I can handle any errors involved with creating/accessing the certificate.
Any help is greatly appreciated.
https://serverfault.com/a/638926/236470
Use Microsoft's xCertificate module (with the xPfxImport resource) for this purpose.
Full disclosure: I wrote the original version of this resource (it's open source in Microsoft's repo now and has since had other contributors).
To answer your original question, you would use the Script resource to run arbitrary code without creating your own resource.
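A minimal sketch of that Script-resource route (the PFX path, password handling, and thumbprint below are placeholders for illustration; in practice the xPfxImport resource mentioned above is the cleaner option):

Configuration WebServerCert
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        Script EnsureCertificate
        {
            # Returns $true (so SetScript is skipped) when the certificate is already in the store.
            TestScript = {
                Test-Path 'Cert:\LocalMachine\My\AB12CD34EF56AB12CD34EF56AB12CD34EF56AB12'
            }
            # Imports the PFX only when TestScript returned $false.
            SetScript  = {
                $pfxPassword = ConvertTo-SecureString 'placeholder-password' -AsPlainText -Force
                Import-PfxCertificate -FilePath 'C:\certs\site.pfx' `
                    -CertStoreLocation 'Cert:\LocalMachine\My' -Password $pfxPassword
            }
            GetScript  = {
                @{ Result = (Test-Path 'Cert:\LocalMachine\My\AB12CD34EF56AB12CD34EF56AB12CD34EF56AB12') }
            }
        }

        # Other resources (for example xWebsite) can then declare
        # DependsOn = '[Script]EnsureCertificate' and reference the same thumbprint.
    }
}

Note that a plain-text password is only acceptable in a sketch; use a credential object or MOF encryption for anything real.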

PowerShell 5.0 DSC and Imports

I want to define a single configuration that uses Install-Package to install xSystemSecurity, then imports it and defines a resource that disables IE ESC.
Is this possible to do in a single configuration with a Script resource and an xSystemSecurity resource?
As soon as I try to import xSystemSecurity at the top of the Configuration, DSC blows up because it's not installed yet.
DSC validates all resources in a configuration before it applies any changes. In order to do this, all resources must already be present on the box or available from a pull server. This means that you can't both install and use a resource in the same configuration. The best solution is to use the pull server to deploy the resource. If you can't use the pull server, then you have to use a two-step process.
Here is an easy way to set up a module repository using a file share: http://nanalakshmanan.com/blog/Push-Config-Pull-Module/ Once that is set up, your configuration should work, since DSC can pull down the required module from the file share.
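If neither the pull server nor a file share is an option, a minimal sketch of the two-step process might look like the following. The xIEEsc resource and property names are from the xSystemSecurity module as I recall them, so treat them as assumptions and check the module's documentation; Install-Module also assumes PowerShellGet and a reachable gallery on the node.

# --- Step 1: run first, as its own script or command ---
Install-Module xSystemSecurity -Force

# --- Step 2: run afterwards, as a separate script ---
# Now that the module is on the box, the Import-DscResource line can be
# resolved when this script is parsed and the configuration compiled.
Configuration DisableIEESC
{
    Import-DscResource -ModuleName xSystemSecurity

    Node 'localhost'
    {
        xIEEsc DisableForAdmins
        {
            UserRole  = 'Administrators'
            IsEnabled = $false
        }
    }
}

DisableIEESC -OutputPath 'C:\DSC\DisableIEESC'
Start-DscConfiguration -Path 'C:\DSC\DisableIEESC' -Wait -Verbose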