Update Hidden Settings After Initial Upload - solana-web3js

I'd like to change my Candy Machine from having hidden settings to no longer be hidden.
Initially, the Candy Machine is created with hidden settings like these:
hiddenSettings: {
  name: "Name",
  uri: "uri...",
  hash: "44kiGWWsSgdqPMvmqYgTS78Mx2BKCWzd",
}
I have attempted updating the candy machine to set the value of hidden settings to null, but this does not change any of the NFTs' metadata or seem to do anything at all.
Is there a way to unhide the assets after initializing them to have hiddenSettings?

Very late but answering for others who may have the same question...
Unfortunately it's not that simple. The hiddenSettings field on a candy machine determines how the NFTs are uploaded. With it set, all NFTs are uploaded with the same URI: the placeholder image and metadata.
Once an NFT is uploaded and minted, the candy machine no longer controls its metadata. Even if you could remove the hidden settings, doing so would not reveal your NFTs. In fact you need to keep the hidden settings (in particular the hash) for a reason explained below. Instead, you need to update the NFTs themselves, setting each one's URI to its actual metadata file.
The tool which makes this easier is Metaboss. It can query the blockchain and make changes for you. In particular, you can find the mint accounts of the NFTs which have been minted and update their URIs. Updating will require the keypair of the wallet holding update authority for the collection.
After installing Metaboss, the command
metaboss snapshot mints -c [YourCandyMachineAddress] --v2
will output an array of the mint accounts to ./[YourCandyMachineAddress]_mint_accounts.json
You can change the output destination with the -o flag. Then for a given NFT you can find the metadata using
metaboss decode mint -a [MintAddress]
which will output the metadata to ./[MintAddress]. Again the output destination can be changed. You will see that this metadata has the URI of your placeholder. The name field, like "SomeCollection #1", identifies which NFT this is. By changing the URI to the actual URI for that NFT, you reveal it. Then wallet and marketplace apps will see the real NFT. You can do this with
metaboss update uri -k [/path/to/keypair.json] -a [MintAddress] -u [https://somestorage.com/realurifornft1]
All these commands have good nested documentation with --help. Obviously doing this manually for a large collection is very impractical. I have uploaded a bash script for this here. Read the script for usage info.
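To give a rough idea of what such a script has to do, here is a minimal sketch using jq. The file uris.json, mapping each NFT's name to its real metadata URI, is an assumption, as is the name of the file that metaboss decode writes; check both against your own setup:
#!/usr/bin/env bash
# Sketch: reveal every minted NFT of a candy machine.
CM=YourCandyMachineAddress
KEYPAIR=/path/to/keypair.json

metaboss snapshot mints -c "$CM" --v2
for MINT in $(jq -r '.[]' "./${CM}_mint_accounts.json"); do
  metaboss decode mint -a "$MINT"
  NAME=$(jq -r '.name' "./${MINT}.json")           # placeholder metadata still carries the real name, e.g. "SomeCollection #1"
  URI=$(jq -r --arg n "$NAME" '.[$n]' uris.json)   # hypothetical lookup file: name -> real URI
  metaboss update uri -k "$KEYPAIR" -a "$MINT" -u "$URI"
done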
Now you may be thinking "isn't editing the NFT metadata like this shady? Couldn't someone use this to maliciously change my NFT?" You would be correct. To prevent this, the hash field from the hidden settings is very important. This should be the MD5 hash of the cache file created when you launched your candy machine, which contains the "real" metadata URIs. If you were to change the metadata to a different URI, you could totally change the NFT. This hash field exists so that users can confirm after reveal that the real URIs have not been changed, by reconstructing that cache file and comparing the MD5 hashes. Hence you should not remove your hidden settings - without that hash, your collection cannot be trusted.
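For example, after the reveal anyone can recompute the hash and compare it with hiddenSettings.hash (a sketch, assuming the reconstructed cache file is named cache.json and the hash was stored as the hex MD5 digest):
md5sum cache.json   # the digest should match the hash in hiddenSettings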

You cannot unhide. The only solution is creating a new candy machine.


How can I manually edit the list of recently opened files in VS Code?

I rely heavily on the File: Open Recent… command to open frequently used files, but yesterday my local Google Drive folder got moved to a new location and now I can no longer access any of the files in that folder through the Open Recent panel because the paths don't match.
The fix would be as simple as replacing "/Google Drive/" with "/Google Drive/My Drive/" but I have no idea what file contains the list of files that appears in the recently opened panel.
I'm assuming it's somewhere in ~/Library/Application Support/Code but not sure where.
I was wondering the same thing the other day and found this question while searching for a solution, so I took some time to investigate it today.
It's been a few weeks since you posted, so hopefully this will still be of help to you.
Also, I'm using Windows and I'm not familiar with macOS, but I think it should be easy enough to adjust the solution.
Location of settings
Those setting are stored in the following file: %APPDATA%\Code\User\globalStorage\state.vscdb.
The file is an sqlite3 database, which is used as a key-value store.
It has a single table named ItemTable and the relevant key is history.recentlyOpenedPathsList.
The value has the following structure:
{
  "entries": [
    {
      "folderUri": "/path/to/folder",
      "label": "...",
      "remoteAuthority": "..."
    }
  ]
}
To view the current list, you can run the following command:
sqlite3.exe -readonly "%APPDATA%\Code\User\globalStorage\state.vscdb" "SELECT [value] FROM ItemTable WHERE [key] = 'history.recentlyOpenedPathsList'" | jq ".entries[].label"
Modifying the settings
Specifically, I was interested in changing the way it's displayed (the label), so I'll detail how I did that, but it should be just as easy to update the path.
Here's the Python code I used to make those edits:
import json, sqlite3
# open the db, get the value and parse it
db = sqlite3.connect('C:/Users/<username>/AppData/Roaming/Code/User/globalStorage/state.vscdb')
history_raw = db.execute("SELECT [value] FROM ItemTable WHERE [key] = 'history.recentlyOpenedPathsList'").fetchone()[0]
history = json.loads(history_raw)
# make the changes you'd like
# ...
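# For example, the path fix from the question above (a sketch; adapt to your entries):
for entry in history.get("entries", []):
    if "folderUri" in entry:
        entry["folderUri"] = entry["folderUri"].replace("/Google Drive/", "/Google Drive/My Drive/")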
# stringify and update (parameterized, so quotes in the JSON can't break the SQL)
history_raw = json.dumps(history)
db.execute("UPDATE ItemTable SET [value] = ? WHERE [key] = 'history.recentlyOpenedPathsList'", (history_raw,))
db.commit()
db.close()
Code references
For reference (mostly for my future self), here are the relevant source code areas.
The settings are read here.
The File->Open Recent command uses those values as-is (see here).
However, when using the Get Started page, the Recent area is populated here. On the Get Started page, the label is presented in a slightly different way:
(screenshot of the Get Started page showing the Recent list)
The folder name is the link, and the parent folder is the text beside it.
This is done by the splitName method.
Notes
Before messing around with the settings file, it would be wise to back it up.
I'm not sure how vscode handles and caches the settings, so I think it's best to close all vscode instances before making any changes.
I haven't played around with it too much, so not sure how characters that need to be json-encoded or html-encoded will play out.
Keep in mind that there might be some state saved by other extensions, so if anything weird happens, blame it on that.
For reference, I'm using vscode 1.74.2.
Links
SQLite command-line tools
jq - command-line JSON processor

recover lost gpg password

I found my old .gnupg directory in a backup and want to use it again. Unfortunately I have lost my password, but I have some ideas of what it was. I don't have much understanding of gpg and pgp, but I know the basics of asymmetric cryptography.
My challenge now is to recover that key's password, which I might be able to guess from some structure that I recall. So I will need some permutation engine that assembles various pieces of that password and checks whether each candidate is correct. I could write a script that does this, but I could also use John the Ripper with gpg2john. Trying to figure out which way to go, I face some obstacles:
My .gnupg directory is from 2005, created on a Sun system at that time. The directory contains a pubring.gpg and the newer-format pubring.kbx. A subdirectory private-keys-v1.d contains 5 .key files.
Trying john first, I seem to be providing the wrong input.
gpg2john ~/.gnupg/pubring.kbx
File ~/.gnupg/pubring.kbx
can't find PGP armor boundary.
gpg2john ~/.gnupg/pubring.gpg\~
<lots of different messages like>
Hash material(5 bytes):
Sub: image attribute(sub 1) Image encoding - JPEG(enc 1)
Reason - No reason specified
lots of other stuff
Error: No hash was generated for ~/.gnupg/pubring.gpg~, ensure that the input file contains a single private key only
How can I generate a file that gpg2john expects as input?
All my approaches to extracting the private key failed, because that process requires the very password I want to recover ...
For the manual approach I would need a way to test whether a candidate password is correct. What is the easiest approach here? I am a bit confused because I have 5 .key files. Which one is my private key?
gpg --list-keys | grep "My Name" gives me back 3 entries different from the key names in private-keys-v1.d. The keys are labeled [ultimate], [expired], and [revoked].
Whenever I ask gpg to do anything like gpg --export-secret-keys ID > exportedPrivateKey.asc I am getting 2 messageboxes asking for a passphrase for 2 keys. These Ids are found in private-keys-v1.d.
How can I make gpg ask me only for the password of the [ultimate] key?
(In this post, by "certificate" I mean the private-public-key triplet that gpg uses. I might be unclear in what I say to anyone who really understands the concept.)
PS: I am not sure whether the password I might reconstruct belongs to the revoked certificate. If so, can I unlock the private key of the revoked certificate? Can I generate a new certificate based on the revoked one? (I guess not, because otherwise revoking would not have any positive security effect.) What do I gain by getting back the password to a revoked certificate?
I personally believe that gpg2john needs an .asc file, and your approach of exporting it using gpg --export-secret-keys ID > exportedPrivateKey.asc is right. The problem, why you do not succeed, is perhaps this change: https://github.com/open-keychain/open-keychain/pull/1182/files
They "disabled" exporting a passphrase-protected private key without entering the given passphrase. It is not cryptographically needed for such an operation, but it was implemented due to the discussion in issue https://github.com/open-keychain/open-keychain/issues/194.
I suggest you export the given key using a custom-compiled version of gpg with those commits reverted.
I'm not sure if I missed something, but have you simply tried making a backup of the keyring (copy the whole .gnupg folder to be safe) and then deleting keys from it until only the desired one is left? I can't promise that this will work; I have always used john with --armor-exported keys.
By the way, the filenames that you see in the private-keys-v1.d subfolder are keygrips and don't match your key IDs.
You can match keys to their keygrips using the --with-keygrip parameter (e.g., gpg --with-keygrip --list-secret-keys).
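In case it helps, the flow with an armor-exported key looks roughly like this (a sketch: KEYID stands for the key you kept, guesses.txt holding your password fragments is an assumption, and as you noted the export step itself may prompt for a passphrase on modern gpg):
gpg --armor --export-secret-keys KEYID > key.asc
gpg2john key.asc > gpg.hash
john --wordlist=guesses.txt --rules gpg.hash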
PS: You may find this tutorial helpful — https://github.com/drduh/YubiKey-Guide — while it's written for YubiKey users, it has many advanced concepts that are relevant in general.

Unable to run experiment on Azure ML Studio after copying from different workspace

My simple experiment reads from an Azure Storage Table, selects a few columns, and writes to another Azure Storage Table. The experiment runs fine in the original workspace (let's call it Workspace1).
Now I need to move this experiment as-is to another workspace (call it Workspace2) using PowerShell, and I need to be able to run the experiment there.
I am currently using this library: https://github.com/hning86/azuremlps
Problem:
When I copy the experiment using 'Copy-AmlExperiment' from Workspace1 to Workspace2, the experiment and all its properties get copied, except the Azure Table account key.
The copied experiment runs fine if I manually enter the account key for the Import/Export modules on studio.azureml.net.
But I am unable to do this via PowerShell. If I export (Export-AmlExperimentGraph) the copied experiment from Workspace2 as JSON, insert the account key into the JSON file, and import it (Import-AmlExperiment) back into Workspace2, the experiment fails to run.
In PowerShell I get an "Internal Server Error: 500".
When running it on studio.azureml.net, I get the notification "Your experiment cannot be run because it has been updated in another session. Please re-open this experiment to see the latest version."
Is there any way to move an experiment with external dependencies to another workspace and run it?
Edit: I think the problem is something to do with how the experiment handles the account key. When I enter it manually, it's converted into a JSON array comprising RecordKey and IndexInRecord. But when I upload the JSON experiment with the account key, it remains unchanged and does not get resolved into RecordKey and IndexInRecord.
For me, publishing the experiment as a private experiment in the Cortana Gallery is one of the most useful options. Only people with the link can see the experiment and add it from the gallery. At the link below I've explained the steps I followed.
https://naadispeaks.wordpress.com/2017/08/14/copying-migrating-azureml-experiments/
When the experiment is copied, the password is wiped for security reasons. If you want to programmatically inject it back, you have to set another metadata field to signal that what you are setting is a plain-text password, not an encrypted one. If you export the experiment in JSON format, you can easily figure this out.
I think I found the reason why you are unable to get the credentials back in.
Export the JSON graph to your local disk, then update whatever parameter has to be updated.
You will also notice that the credentials are stored as 'Placeholders' instead of 'Literals', so it makes sense to change them to Literals.
You can do this by traversing the JSON to find the relevant parameters you need to update.
Here is a brief illustration.
Changing the Placeholder to a Literal:
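Roughly, in Python (a sketch only: the exact field names, such as ValueType, are assumptions; inspect your own exported JSON to find the real ones):

import json

# Load the exported experiment graph.
with open("experiment.json") as f:
    graph = json.load(f)

def make_literal(node):
    # Walk the graph recursively, turning Placeholder credentials into Literals.
    if isinstance(node, dict):
        if node.get("ValueType") == "Placeholder":    # assumed field name
            node["ValueType"] = "Literal"
            node["Value"] = "<storage-account-key>"   # your plain-text key
        for child in node.values():
            make_literal(child)
    elif isinstance(node, list):
        for child in node:
            make_literal(child)

make_literal(graph)

with open("experiment_fixed.json", "w") as f:
    json.dump(graph, f, indent=2)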

Recovering data from Firebird database partially-encrypted by ransomware

Our test server was hacked and ransomware (Cry36) was installed, for which there is no solution to date. We also didn't keep any snapshots up to date (lesson learned).
Since it's only a test server, I am not too worried. But we had stored a bunch of work in our Firebird DB (v2.5) which I would like to save.
Looking at the database in a hex editor, I can see that the data is encrypted up until offset 00006430.
Looking at the structure of the Firebird database, it says that all the headers are encrypted (header page, PIP, ..., data page).
All the data is still there.
I've tried gfix and even copying the headers from an older version of the DB. But while that does fix the DB, the headers are wrong and most of the newer pages are removed.
Does anyone have any idea how to restore the database or extract the tables?
Regards
I have used this method to restore files encrypted on hard drives by any ransomware: rename the file in question back to its original filename and extension. You may be able to apply the same method to revert the database file(s) back to the pre-encrypted version.
From my testing:
The ransomed file is compressed and/or simply renamed; either the encryption is not actually applied (only implied), or the containing/renamed file is encrypted while the original file is never touched. Simply rename it back to the original and you can access the file as you could before the attack. Example:
This is the Ransomed file:
Adobe Acrobat XI Pro 11.0.20.zip.id[42AF04FF-2275].[supportcrypt2019#cock.li].Adame
This is the Ransomed file, renamed and fixed:
Adobe Acrobat XI Pro 11.0.20.zip
The removed portion of the FileName is:
.id[42AF04FF-2275].[supportcrypt2019#cock.li].Adame
Upon renaming the file, you will be prompted to approve changing the file type, which restores which application will open it (its original association, determined by the extension at the end of the file name). The reason the file doesn't work when ransomed is the final extension in the renaming scheme: in this case .Adame is not a real file type but a made-up one, and no program will or can open it. Thus, the file cannot be opened as named.
You would need to do this for each file individually. Could you post more information on the database file and its encryption? This should work for you as well, as the ransom methodology should be the same. I cannot identify the naming scheme used on your system without more information about unusual or new/unidentified portions of code injected throughout your instance.
For Renaming multiple files you could try an application such as "Advanced Renamer" for bulk processing.
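If you would rather script the renaming than use a GUI tool, here is a minimal Python sketch (the suffix pattern is taken from the example above; adjust it to whatever your ransomware appended):

import re
from pathlib import Path

# Strip a ransomware-appended suffix such as
# ".id[42AF04FF-2275].[supportcrypt2019#cock.li].Adame" from every file name.
SUFFIX = re.compile(r"\.id\[[^\]]*\]\.\[[^\]]*\]\.Adame$")

for path in Path("ransomed_files").iterdir():    # hypothetical folder of affected files
    new_name = SUFFIX.sub("", path.name)
    if new_name != path.name:
        path.rename(path.with_name(new_name))    # "x.zip.id[...].Adame" -> "x.zip"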

Export AD structure from specific OU, then re-create structure in new domain

I've researched and found that the way to export our Active Directory information for our application is like this:
csvde -d OU=MyAppsOU,DC=dot,DC=testdmz,DC=lan
-f C:\temp\addump_ou.csv -r (objectClass=organizationalUnit)
Now, I've read that to do an import from that file, you just have to add the -i option to the line like this:
csvde -i -d OU=MyAppsOU-New,DC=dot,DC=newdmz,DC=lan
-f C:\temp\addump_ou.csv -r (objectClass=organizationalUnit)
Obviously, I'm very scared to try this as I don't want to blow away anything. My questions are:
Does specifying the OU=MyAppsOU-New create the new OU structure with that specific name? (I'm just trying to be 100% positive)
Does specifying the different domain name (newdmz) just update all of the data in the file to contain the new domains name?
or
Do I need to modify the exported csv file to change the domain name (testdmz) to what the new domain name will be (newdmz)?
Is there a different way I should be doing this?
I just want to re-create the OU structure without groups, roles (which are groups) and users. I will probably do those in a different process because we have different usernames for test and production.
Wow! Lots of questions here, but in my opinion not enough detail.
Beginning at the end: CSVDE.EXE is really not the tool I would use. As a directory developer I prefer LDIFDE.EXE, because it generates LDIF (LDAP Data Interchange Format), which is more standard and more readable. You can also have a look at tools like ADAMSync.EXE that allow you to synchronize two directories in the AD world (but that's a big hammer for what you want to do here).
Now, choosing LDIFDE.EXE, you will see that the LDIF format is almost importable as-is, but you need to remove operational attributes (system attributes) from the file. The best way is to exclude them during the export: use the -l option to export only the attributes you need, or the -o option to omit the operational attributes.
To import into the other domain, use the -c option to replace the original domain part (DC=dot,DC=testdmz,DC=lan) with the new domain part.
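Concretely, the export and import might look like this (a sketch using the names from the question; the list of omitted attributes is illustrative, not exhaustive):
ldifde -f C:\temp\ou_structure.ldf -d "OU=MyAppsOU,DC=dot,DC=testdmz,DC=lan" -r "(objectClass=organizationalUnit)" -o "objectGUID,uSNCreated,uSNChanged,whenCreated,whenChanged"
ldifde -i -k -f C:\temp\ou_structure.ldf -c "DC=dot,DC=testdmz,DC=lan" "DC=dot,DC=newdmz,DC=lan"
(-k tells the import to skip benign errors such as "object already exists".)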
Try it in a virtual machine first.