Export AD structure from specific OU, then re-create structure in new domain - command-line

I've researched and found that the way to export our Active Directory information for our application is like this:
csvde -d OU=MyAppsOU,DC=dot,DC=testdmz,DC=lan
-f C:\temp\addump_ou.csv -r (objectClass=organizationalUnit)
Now, I've read that to do an import from that file, you just have to add the -i option to the line like this:
csvde -i -d OU=MyAppsOU-New,DC=dot,DC=newdmz,DC=lan
-f C:\temp\addump_ou.csv -r (objectClass=organizationalUnit)
Obviously, I'm very scared to try this as I don't want to blow away anything. My questions are:
Does specifying the OU=MyAppsOU-New create the new OU structure with that specific name? (I'm just trying to be 100% positive)
Does specifying the different domain name (newdmz) just update all of the data in the file to contain the new domains name?
or
Do I need to modify the exported csv file to change the domain name (testdmz) to what the new domain name will be (newdmz)?
Is there a different way I should be doing this?
I just want to re-create the OU structure without groups, roles (which are groups) and users. I will probably do those in a different process because we have different usernames for test and production.

Wow! Lots of questions here, but in my opinion still not enough.
Beginning with the end: CSVDE.EXE is really not the tool I would use. As a directory developer I prefer LDIFDE.EXE, because it generates LDIF (LDAP Data Interchange Format), which is more standard and more readable. You can also have a look at tools like ADAMSync.exe, which synchronizes two directories in the AD world (but it's a big hammer for what you want to do here).
Choosing LDIFDE.EXE, you will see that the LDIF format is almost importable as is, but you need to remove the operational (system) attributes from the file. The best way is to take care of them during the export, so you will use the -l option to export only the attributes you need, or the -o option to omit the ones you don't want (such as the operational attributes).
To import into another domain, you will use the -c option to replace the original domain part (DC=dot,DC=testdmz,DC=lan) with the new domain part.
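A rough sketch of what that could look like (the attribute list and .ldf file name here are only examples to adapt, not required values):
ldifde -f C:\temp\addump_ou.ldf -d "OU=MyAppsOU,DC=dot,DC=testdmz,DC=lan" -r "(objectClass=organizationalUnit)" -l "objectClass,ou,description"
ldifde -i -f C:\temp\addump_ou.ldf -c "DC=dot,DC=testdmz,DC=lan" "DC=dot,DC=newdmz,DC=lan"
Note that -c does a plain replacement of the first DN string with the second throughout the file, so if the parent OU is also being renamed (MyAppsOU-New), include the OU part in both -c arguments or edit the file accordingly.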
Try it in a virtual machine first.

Related

Update Hidden Settings After Initial Upload

I'd like to change my Candy Machine from having hidden settings to no longer be hidden.
Initially, the Candy Machine is created with hidden settings like these:
hiddenSettings: {
name: "Name",
uri: "uri...",
hash: '44kiGWWsSgdqPMvmqYgTS78Mx2BKCWzd',
}
I have attempted updating the candy machine to set the value of hidden settings to null, but this does not change any of the NFTs' metadata or seem to do anything at all.
Is there a way to unhide the assets after initializing them to have hiddenSettings?
Very late but answering for others who may have the same question...
Unfortunately it's not that simple. The "hidden settings" of a candy machine determine how the NFTs are uploaded. With them set, all NFTs will be uploaded with the same URI - the placeholder image and metadata.
Once an NFT is uploaded and minted, the candy machine does not control its metadata. Even if you could remove the "hidden settings" field, this would not reveal your NFTs. In fact you need to keep the hidden settings (in particular the hash) for a reason listed below. Instead, you need to update the NFTs themselves, setting the new URI to the actual metadata file.
The tool which makes this easier is Metaboss. It can explore the blockchain and make changes for you. In particular, you can find the mint accounts of the NFTs which have been minted and update their URIs. Updating will require the keypair for the wallet that holds update authority for the collection.
After installing Metaboss, the command
metaboss snapshot mints -c [YourCandyMachineAddress] --v2
will output an array of the mint accounts to ./[YourCandyMachineAddress]_mint_accounts.json
You can change the output destination with the -o flag. Then for a given NFT you can find the metadata using
metaboss decode mint -a [MintAddress]
which will output the metadata to ./[MintAddress]. Again the output destination can be changed. You will see that this metadata has the URI of your placeholder. The name field, like "SomeCollection #1", identifies which NFT this is. By changing the URI to the actual URI for that NFT, you reveal it. Then wallet and marketplace apps will see the real NFT. You can do this with
metaboss update uri -k [/path/to/keypair.json] -a [MintAddress] -u [https://somestorage.com/realurifornft1]
All these commands have good nested documentation with --help. Obviously doing this manually for a large collection is very impractical. I have uploaded a bash script for this here. Read the script for usage info.
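For a rough idea of what such a script boils down to (the mint_to_uri.txt lookup file below is hypothetical, not something Metaboss produces - it just pairs each mint address with its real metadata URI), the core is a loop over the minted addresses:
# hypothetical two-column file: mint address, then the real metadata URI for that NFT
while read -r MINT URI; do
metaboss update uri -k /path/to/keypair.json -a "$MINT" -u "$URI"
done < mint_to_uri.txt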
Now you may be thinking "isn't editing the NFT metadata like this shady? Couldn't someone use this to maliciously change my NFT?" You would be correct. To prevent this, the hash field from the hidden settings is very important. This should be the MD5 hash of the cache file created when you launched your candy machine, which contains the "real" metadata URIs. If you were to change the metadata to a different URI, you could totally change the NFT. This hash field exists so that users can confirm after reveal that the real URIs have not been changed, by reconstructing that cache file and comparing the MD5 hashes. Hence you should not remove your hidden settings - without that hash, your collection cannot be trusted.
You cannot unhide. The only solution is creating a new candy machine.

Why are there tag keys missing when downloading OSM data to Postgis / Postgresql?

I'm working on a routing application using OSM data in pgrouting. I'm using overpass-api to access the data from a specific bounding box. However, after downloading the data, there seem to be tag_keys missing from the data.
When inspecting the data using PostGIS or QGIS, certain tag keys are there, like "highway", "oneway" or "maxspeed". However, others seem to be missing. In particular, the tag keys "bicycle" (with possible values like "yes" or "no") and "access" are not included in the data. These tag keys are available on OSM online, however.
The following code is used to retrieve the data from OSM through Overpass-API and put it into PGrouting
CITY="Utrecht_west"
BBOX="4.9926,52.0698,5.0772,52.1172"
wget --progress=dot:mega -O "$CITY.osm" "http://www.overpass-api.de/api/xapi?*[bbox=${BBOX}][#meta]"
OSM2pgrouting converter
cd ~/Desktop/Utrecht
osm2pgrouting \
-f Utrecht_west.osm \
-d utrecht_west \
-U user
I expect these lines to download all data in the bounding box, but some tag keys seem to be missing. What am I doing wrong here?
Edit: it seems to be a similar issue to this post; however, I cannot find an answer to a similar issue anywhere else.
I'm not familiar with osm2pgrouting. However it looks like mapconfig.xml doesn't include "bicycle" and "access" tags. You either need to add them or create your own config file. If you want osm2pgrouting to consider these tags during routing this might not be enough, though.
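For example (a sketch; mapconfig_for_bicycles.xml is one of the sample configurations shipped with the osm2pgrouting project, and you may still need to extend it with the exact tags you want), you would point the converter at an alternative configuration with the -c flag:
osm2pgrouting \
-f Utrecht_west.osm \
-d utrecht_west \
-U user \
-c mapconfig_for_bicycles.xml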

Using global variables in a ps1

I can't seem to find good enough solution to my problem. Is there a good way of grouping variables in some kind of file so that multiple scripts could access them?
I've been doing some work with Desired State Configuration, but the work that needs to be done cannot be efficiently implemented that way. The point is to install the Azure Build Agent on a server and then configure it. There are some values, like a Personal Access Token, that really should not just be copy-pasted inside a script file. I want to be able to change them easily without having to go into every script that uses them. In DSC you can just make a .psd1 file and access the variables like, for example, AllNodes.NodeName. The config file invocation and parameters look like this:
.\config.cmd --unattended --url $myUrl --auth PAT --token $myToken --pool default --agent "$env:COMPUTERNAME" --acceptTeeEula --work $workDir
I want to make the variable $myToken accessible from an outside file for better security and to have a centralized place from which I can change values. $myUrl is also important to have access to, since it changes with new updates to the Build Agent.
Thank you in advance for your effort. If anything is not clear please let me know.
I have two very different answers to your question, although either one of them may miss your point.
First, it's possible to define variables inside your profile script. Most people only use the profile script to define a library of functions or classes, but a variable can be made global the same way.
I have a variable named $myps that identifies the folder where I keep my PS scripts (in subfolders).
When I start a session I generally switch to this directory (oops, I called it a folder above).
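For example (the path is only an illustration), a couple of lines like this in the $PROFILE script make the variable available in every session:
# runs at the start of every session
$global:myps = 'C:\Users\me\Documents\PSScripts'
Set-Location $global:myps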
The second way involves storing the values of variables in a CSV file, with the names stored in the CSV header. I then have a quick little cmdlet that steps through the CSV file, record by record, generating a different expansion of a template each time through.
These values are not quite global, but they can be used in more than one context.
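A minimal sketch of that idea (the agents.csv file and its Url/Token columns are my own assumptions, not part of your setup):
# one record per agent: reuse the values from each row
Import-Csv .\agents.csv | ForEach-Object {
.\config.cmd --unattended --url $_.Url --auth PAT --token $_.Token
}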
Thank you for the help. Those are very useful solutions in some cases, but I dug a bit deeper and found a solution that suits my purpose. Basically, if you have a psd1 file suited for DSC use, you can also access its content from a normal ps1 file. For example:
NonNodeData = @{
Pat = 'somePAT'
}
Let's say this section of a psd1 file called ENV.psd1 is on your local machine in C:/Configuration
To access the content of this file you have to make a variable inside your script and use Import-PowerShellDataFile like so:
$configData = Import-PowerShellDataFile -Path "C:\Configuration\ENV.psd1"
And now you are free to use anything stored inside ENV.psd1. For example, if I want to extract my PAT from the config file and store it in a variable in the script:
$myPat = $configData.NonNodeData.Pat
Thanks to that I can just pass $myPat as a parameter when invoking config.cmd like so:
.\config.cmd --unattended --auth PAT --token $myPat
Keeping my code cleaner and easier for any future updates.
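For reference, the whole ENV.psd1 could look something like the sketch below (the Url entry is just an example of another value you might centralize; the key name is my own choice, not required by DSC):
@{
NonNodeData = @{
Pat = 'somePAT'
Url = 'https://dev.azure.com/yourOrganization'
}
}
It is read the same way, e.g. $myUrl = $configData.NonNodeData.Url.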

LibXML: Comment-out a block of Elements

Is there a way to add/initiate a comment (e.g. $dom->createComment ...) such that it comments out an entire block of XML tags? Basically I want to turn off the content between the comment markers.
For example, it would look like this:
<TT>
<AA>keep</AA>
<!-- comment to blocking
<BB>hideme1</BB>
<CC>hideme2</CC>
-->
<DD>d's content is good</DD>
</TT>
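Something like this rough, untested sketch is what I have in mind with XML::LibXML (element names and file name taken from the example above):
use XML::LibXML;
# replace <BB> and <CC> with a single comment holding their serialized markup
my $dom = XML::LibXML->load_xml(location => 'config.xml');
my ($bb) = $dom->findnodes('//BB');
my ($cc) = $dom->findnodes('//CC');
my $comment = $dom->createComment(
" comment to blocking\n" . $bb->toString . "\n" . $cc->toString . "\n"
);
$bb->parentNode->insertBefore($comment, $bb);
$_->unbindNode() for ($bb, $cc);
print $dom->toString;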
Actually this question is a precursor to my attempt to figure out a method to mark up/label/identify the changes made to an XML file in support of new client software functionality, while keeping the ability to remove/back out these XML changes in the rare event that the client needs to fall back to the previous software version (and no, I can't simply point back to the original XML file, because the client is allowed to make minor modifications to existing node text values). This is all going to be controlled via a Perl script and LibXML's core modules (I can't use modules the client doesn't have).
So basically I've identified three possible types of xml changes as a result of new client sw functionality:
1.) ADD new element node(s) (typically to support new sw functionality)
2.) DELETE element node(s), or blocks of (would be rare, but never-the-less a possibility)
3.) CHANGE node text values (rare, but the new sw may require a new value)
For all three types, the client needs the ability to back out the changes. One thing I was thinking of using is ATTRIBUTES, since the existing XML files don't use them. For example, for an ADD change type, I could include an attribute like ADD="sw version 4.1". This way, if it needs to be removed, I could simply have the Perl script find those attribute strings and delete the nodes (using LibXML methods). Same thing with the CHANGE type - I could use an attribute like CHG="newvalue_oldvalue", then again use straight Perl (or LibXML) to switch the value back based on the contents of the attribute. The DELETE change type is giving me a problem though (as well as the others, lol!). I want to be able to "keep" the deleted lines in the XML file solely for the case where the software falls back a version (at some later point the Perl script could eventually clean them up/delete them).
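As a rough, untested sketch of the rollback side (the version string and attribute names are just the examples from above):
# drop every node the new version added
$_->unbindNode() for $dom->findnodes('//*[@ADD="sw version 4.1"]');
# restore old text values recorded as CHG="newvalue_oldvalue"
for my $node ($dom->findnodes('//*[@CHG]')) {
my (undef, $old) = split /_/, $node->getAttribute('CHG'), 2;
$node->removeChildNodes();
$node->appendText($old);
$node->removeAttribute('CHG');
}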
I know this is a lot; I'm new to LibXML (but not to Perl). I was just wondering if any of you have any thoughts as to how to go about it, or have seen anything resembling this kind of request. I'd be grateful for any kind of advice! Thank you...

Storing resources into "sub-bundle"?

I'd like to know if it is possible to store resource data in "sub-bundles" or "sub-packages" that I could put into my main AppBundle.
Indeed, I'd like to create a kind of player that reads "content packages", which are all organized the same way, with a standard hierarchical organization:
Package1:
- index.txt
- credits.txt
- Pictures/
-- Pic1.png
-- Pic2.png
- Movies/
-- intro.mov
-- outro.mov
My problem is that I can't find any way to take advantage of a hierarchical organization inside my application package - I mean that I don't know how to distinguish "Folder1/index.txt" from "Folder2/index.txt", because I just use the "index.txt" identifier when I try to load the content of the file...
I hope someone can help me. By the way, I apologize for my poor English.
Cheers,
First, make sure that you're copying the files into subdirectories of Resources. One way to do this is to add your Package1 as a "folder reference" (select "Create Folder References" instead of "Recursively create groups" when you add it to the project). Another way to do it is to create a new Copy Files build action that copies the files into a subdirectory of Resources.
Once you've done that, you can use NSBundle's -pathForResource:ofType:inDirectory: to find the particular file you want. There's no need to go to the trouble of creating a sub-bundle (which is possible, but more complex).
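For instance (a sketch in Objective-C; the package and file names are taken from the example above, and this assumes Package1 was copied into the bundle as a folder reference), loading Package1/index.txt would look something like this:
// look the file up inside the Package1 subdirectory of Resources
NSString *path = [[NSBundle mainBundle] pathForResource:@"index"
ofType:@"txt"
inDirectory:@"Package1"];
// read its contents as UTF-8 text
NSString *text = [NSString stringWithContentsOfFile:path
encoding:NSUTF8StringEncoding
error:NULL];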