How to update metadata for an existing SPL token

I am trying to update the token metadata for my existing token. Are there any other steps that need to be carried out to update the token metadata correctly?
I updated the token metadata using the Strata Protocol Token Launchpad (https://app.strataprotocol.com/edit-metadata) a week ago, but the token logo and name still don't show up at all on the Solana Explorer.
Token Address: https://explorer.solana.com/address/7KG5WNqNbUdXY5MBX7TUVZMTSD5cGoYxwYwry96GD1sM/metadata
How can I update token name and logo correctly?
How can I edit or remove the social channel links in the metadata?
Any help is appreciated as always. Thanks in advance.

The new standard for SPL token metadata is actually the same as the one for NFTs now.
Each token has its own token metadata account (which will need to be created if the token was created via the SPL Token CLI); in your case one was created for you if you used Strata to create the token.
Updating SPL token metadata is now as simple as updating an NFT on Solana. Various tools can do it: https://metaboss.rs is one, and the Strata website should also work to update the token. I've also got a site which, after testing, updates tokens just fine, although at the moment it adds in some unnecessary data related to NFTs; it functions, though. If you wish to try that, let me know in a comment.
If Strata doesn't work for you, you could try Metaboss.
You'll first need to upload an image and a JSON file to Arweave and link them to each other.
Upload the image to Arweave or similar.
Create and upload a new JSON file (to Arweave or similar) with a few attributes inside it.
https://github.com/solana-labs/token-list
You can see from the above document at Solana Labs that you need
{
  "name": "TOKEN_NAME",
  "symbol": "TOKEN_SYMBOL",
  "description": "TOKEN_DESC",
  "image": "TOKEN_IMAGE_URL"
}
in your JSON file. Your image URI should be a link to the image you uploaded in step one.
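As a minimal sketch (the field values and the Arweave image URL are placeholders), assembling and saving that JSON before you upload it could look like this in Python:

import json

# Placeholders: fill in your token's real name/symbol/description and the
# Arweave URL you got when uploading the image in step one.
metadata = {
    "name": "TOKEN_NAME",
    "symbol": "TOKEN_SYMBOL",
    "description": "TOKEN_DESC",
    "image": "https://arweave.net/YOUR_IMAGE_TX_ID",
}

# Save it, then upload this file to Arweave (or similar); the resulting URL is
# the URI you point the on-chain metadata at in the next step.
with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)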
You can then use the update commands in Metaboss to update the on-chain details such as the name, symbol, and URI.
https://metaboss.rs/update.html

Hmm, I've got the same problem here.
In my case, I set the authority of the SPL token to null right after initializing and minting, to make sure no one could ever mint more.
Given that "You must have mint authority in order to create or update the metadata", that means such existing SPL tokens have no way to update their metadata.

What is the workflow for a basic Auth OIDC with Keycloak

I have Keycloak on Docker (v20.0.2), and as you know some versions change a good part of the UI, so it's hard to follow tutorials around the web...
I am trying to follow this particular tutorial,
https://developers.redhat.com/blog/2020/11/24/authentication-and-authorization-using-the-keycloak-rest-api#keycloak_sso_demo
which seems the most up to date. My Keycloak is actually behind Traefik and thomseddon/traefik-forward-auth with a docker-compose file (but the connection through Traefik is fine and I have access to the admin UI).
So at step 10 of the tutorial things change for me; I have to look for that particular view as follows:
Click on the side menu Client scopes
Click on the button Create client scope
Give a name to the scope, and click on the Mappers tab
All mappers are predefined... so there is no "New mapper"; I don't understand this bit
Then just follow the tutorial.
With that series of steps I get an error when retrieving the token...
https://keycloak:8443/realms/education/protocol/openid-connect/token
(these are fake local data from the realm I created for testing)
It responds with the following, or something similar. I have also tried changing the grant_type to password, and the same thing happens; I cannot query the token:
{
  "error": "invalid_client",
  "error_description": "Invalid client or Invalid client credentials"
}
But if I do not link a user with a scope/role as the tutorial suggests, then I do get the token. Of course, I want to use the role or scope to limit who can see which endpoint and who cannot.
Is there any step that I'm missing after this UI update? Do you get the same error?
Thank you in advance
I have tried running it with different combinations of options to see if there is a toggle that actually allows me to fetch the token,
and also with different types of grant_type.
I will build an API in Python (I don't know Java and prefer JSON to XML) that connects to this Keycloak to allow or deny users based on their scope/role/permission or something similar.
I need to be able to block users, so that if a Student user tries to access a URL belonging to another Student, they are blocked from that URL. Whether it is based on the role or the scope, I don't know which is preferred or easier to accomplish; the mission is to allow or block users based on some factor in Keycloak that can be used for this.
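For reference, this is roughly how I'm calling the token endpoint from Python (the client ID, secret, and user credentials below are placeholders for my test data):

import requests

# Placeholders for my local test realm; verify=False only because of the
# self-signed certificate on my local Keycloak.
TOKEN_URL = "https://keycloak:8443/realms/education/protocol/openid-connect/token"

payload = {
    "grant_type": "password",          # also tried client_credentials
    "client_id": "my-client",
    "client_secret": "my-client-secret",
    "username": "student1",
    "password": "student1-password",
}

resp = requests.post(TOKEN_URL, data=payload, verify=False)
print(resp.status_code)
print(resp.json())   # either an access_token or the invalid_client error above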

Can't create working Shared Access Signature for Azure Files

I need to create a SAS so I can create an Azure SQL Extended Event session. The event session needs a file data storage target via SAS and I can't create one that works. Here's what I've tried:
Identified a storage account that's not blob-only, just general-purpose. I'm pretty sure I need general-purpose so I can create files directly.
Created a file share therein.
Using azure storage explorer, right clicked on that file share and selected, "Get Shared Access Signature."
Checked Read, Write, List and created.
This gives me the URL https://mystorageacct.file.core.windows.net/xevents?st=2018-12-25T16%3A29%3A51Z&se=2018-12-29T16%3A29%3A00Z&sp=rwl&sv=2018-03-28&sr=s&sig=mysig
If I just try to follow this URL or create a CloudFile object with it in code, I get the oft-seen error, Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. Signature did not match. String to sign used was rwl 2018-12-25T16:29:51Z 2018-12-29T16:29:00Z /file/cs7f0fbc5104d4ax435dx883/$root 2018-03-28
Tried adding in comp=list&restype=container as suggested here. No joy.
Ensured I have no access policy in use.
Went to the azure portal and created a different SAS at the storage account level (couldn't see a way to create it on the file share). That gave me this "File service SAS URL": https://mystorageacct.file.core.windows.net/?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2018-12-30T01:25:16Z&st=2018-12-26T17:25:16Z&spr=https&sig=mysig
If I try that URL I get Value for one of the query parameters specified in the request URI is invalid. I don't know which parameter is in question; they look fine to me. Based on this doc srt is resource type, but I don't know what the value sco indicates.
Very confused, looking for suggestions.
For any future readers, extended event sessions confusingly (because they write a file) require blob containers, not general/file/queue containers. At least I could only get them to work that way.
You are probably confused by how the SAS URLs are presented. In fact, the SAS URLs you got just provide examples of how to use the SAS token; they can't be used directly. Hence the errors you saw.
Service-level SAS URL, i.e. the one you got from Storage Explorer.
It's in the format fileEndPoint/fileShareName?SASToken. The SASToken gives us permission to operate on all files inside the specified file share. To leverage the token, we need to add the fileName to the URL, i.e. fileEndPoint/fileShareName/fileName?SASToken.
comp=list&restype=container is to list blobs in Blob Container, not for File Share.
Account-level SAS URL, i.e. the one you got from the Azure portal.
It's in the format fileEndPoint/?SASToken. Likewise, we need to complete the URL to make it valid, i.e. fileEndPoint/fileShareName/fileName?SASToken. Note that this SASToken has all permissions on all Storage resources because all the choices are checked.
sco means we have permission to operate on the service, container, and object, i.e. it indicates the scope of the permission; check the doc for details.
I am not familiar with Azure SQL Extended Event sessions, but if you only need to work with files inside one file share, the first one is enough.
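As a quick sketch of how to use the first (service-level) SAS from code, with all names below being placeholders:

import requests

# Pieces of the service-level SAS URL you got from Storage Explorer (placeholders).
file_endpoint = "https://mystorageacct.file.core.windows.net"
file_share = "xevents"
sas_token = "st=2018-12-25T16%3A29%3A51Z&se=2018-12-29T16%3A29%3A00Z&sp=rwl&sv=2018-03-28&sr=s&sig=mysig"

# The token alone isn't a usable URL; point it at a concrete file in the share:
# fileEndPoint/fileShareName/fileName?SASToken
file_name = "session1.xel"   # placeholder file name
url = f"{file_endpoint}/{file_share}/{file_name}?{sas_token}"

# A plain GET on this URL is the File service's Get File operation; once the
# file exists in the share, it should no longer fail with the signature error.
print(requests.get(url).status_code)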

JSON request is not configured with ZAP authentication

I am using the ZAP security testing tool, but when it comes to authentication by username and password in a JSON request, I have a problem configuring it. I checked all the links and blogs too, but I can't find a proper step-by-step solution for it.
Request body:
{"userName":"cwc_patna","password":"33a0d2e93e0ad396b7c9374bbbc83a58"}
Response body:
{"userId":72,"userName":"cwc_patna","password":"33a0d2e93e0ad396b7c9374bbbc83a58","emilId":"pratyush#sdrc.co.in","userTypeId":1,"viewName":"cwc","isLive":null,"isActive":null,"isApproved":null,"sjpuAccess":null,"userUserTypeFeaturePermissionMapping":null,"area":null}
That functionality was only just added last week: https://github.com/zaproxy/zaproxy/pull/4624
If you want to use it, you'll either have to use a weekly release: https://github.com/zaproxy/zaproxy/wiki/Downloads#zap-weekly
Or, wait for the next full release (likely 2.8.0).
The corresponding PR to update the help content for the new JSON Authentication functionality is here: https://github.com/zaproxy/zap-core-help/pull/188/files if you want to check it out.
You set it up the same way you would for form-based authentication. Make sure you define a Logged-in or Logged-out Identifier (or both). Here are some screenshots to help you along:
Manually configure the Authentication for your Context:
Use the Site Tree Context menu(s) to set it up:
Here's an additional help link that might assist you in getting authentication setup: https://github.com/zaproxy/zaproxy/wiki/FAQformauth
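If you'd rather script it than click through the desktop UI, here's a rough sketch using the ZAP Python client; the context ID, login URL, indicator regex, and the exact name of the JSON-based authentication method are assumptions on my side, so double-check them against the help linked above:

from urllib.parse import quote
from zapv2 import ZAPv2

zap = ZAPv2(apikey="changeme")   # assumes a locally running ZAP exposing its API with this key

context_id = "1"   # the Context you created for the target app (assumption)

# JSON body sent on login; ZAP fills the {%username%}/{%password%} placeholders per user.
login_url = "https://example.com/api/login"   # placeholder
login_body = '{"userName":"{%username%}","password":"{%password%}"}'

# "jsonBasedAuthentication" is my assumption for the method name added by that PR;
# the setup otherwise mirrors the form-based flow, as described above.
zap.authentication.set_authentication_method(
    context_id,
    "jsonBasedAuthentication",
    "loginUrl=" + quote(login_url) + "&loginRequestData=" + quote(login_body),
)

# Logged-in indicator so ZAP can tell authenticated responses apart (assumed regex).
zap.authentication.set_logged_in_indicator(context_id, '"userId":\\d+')

# Create a user in the context, attach the real credentials, and enable it.
user_id = zap.users.new_user(context_id, "cwc_patna")
zap.users.set_authentication_credentials(
    context_id, user_id, "username=cwc_patna&password=" + quote("SECRET")
)
zap.users.set_user_enabled(context_id, user_id, "True")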

Import API not working in Sisense

I was trying to use the dashboard import API from v1.0, which can be found in the REST API reference. I logged in to http://localhost:8083/dev/api/docs/#/ , gave the correct authorization token, a dash file in the body, and a 24-character importFolder, and hit the Run button to fire the API. It returns 201 as the HTTP response, which means the request was successful. However, when I go back to the homepage, I don't see any new dashboard imported into the said folder. I have tried both cases, where the importFolder exists (already created manually by me) and where it does not already exist, in which case I expect the API to create it for me. Neither of these, however, creates/imports the dashboard.
A few comments that should help you resolve this:
When running the command from the interactive API reference (swagger) you don't need the authentication token, because you're already logged in with an active session.
Make sure the JSON of your dashboard is valid by saving it as a .dash file and importing it via the UI.
The folder field is optional - if you leave the field blank, the dashboard is imported to the root of your navigation/folders panel.
If you'd like to import into a specific folder, you'll need to provide the folder ID, not its name. The ID can be found in several ways, such as using the /api/v1/folders endpoint, where you can provide a name filtering field and use the oid property of the returned object as the value for the folder field in the import endpoint (see the sketch after this list).
If you still can't get this to work, use Chrome's developer tools to look at the outgoing request when you import from the UI and compare that request (headers, body, and path) to what you're doing via swagger in order to find the issue.
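As a sketch of that folder lookup (the host, token, and folder name are placeholders, and the exact name of the filtering parameter is an assumption; check the REST API reference):

import requests

BASE_URL = "http://localhost:8083"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}   # not needed when calling from swagger

# Look the folder up by name and grab its oid.
resp = requests.get(
    f"{BASE_URL}/api/v1/folders",
    params={"name": "My Import Folder"},   # name filtering field (assumed parameter name)
    headers=HEADERS,
)
resp.raise_for_status()
folders = resp.json()

# Use this oid as the value of the folder field in the import endpoint.
folder_oid = folders[0]["oid"]
print(folder_oid)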

Fetching Contact image from SugarCRM

I'm trying to integrate my Rails app with SugarCRM. Is it possible to fetch the Contact picture from SugarCRM using the REST API? Please let me know.
To get the profile image for a user do the following:
Call the login method through REST
Call the get_entry_list method through REST, with the following parameters:
Module: Users
Query: users.user_name = 'xxxx'
Select_fields: picture
The response contains the filename for the profile image, which is stored in /uploads.
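As a rough sketch of those two calls (I'm assuming the legacy v4_1 REST endpoint here; the host and credentials are placeholders, and from Rails it is the same two POSTs):

import hashlib
import json
import requests

# Assumed endpoint; adjust to your SugarCRM version/instance.
REST_URL = "https://crm.example.com/service/v4_1/rest.php"

def call(method, rest_data):
    """Helper for SugarCRM's legacy REST wrapper."""
    resp = requests.post(REST_URL, data={
        "method": method,
        "input_type": "JSON",
        "response_type": "JSON",
        "rest_data": json.dumps(rest_data),
    })
    return resp.json()

# 1. Login (password is sent as an MD5 hash in this legacy API).
session = call("login", {
    "user_auth": {
        "user_name": "admin",
        "password": hashlib.md5(b"admin_password").hexdigest(),
    },
    "application_name": "profile-image-fetch",
})["id"]

# 2. get_entry_list on the Users module, selecting only the picture field.
result = call("get_entry_list", {
    "session": session,
    "module_name": "Users",
    "query": "users.user_name = 'xxxx'",
    "order_by": "",
    "offset": 0,
    "select_fields": ["picture"],
})

# The picture value is the filename stored in the uploads folder, as described above.
print(result["entry_list"][0]["name_value_list"]["picture"]["value"])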
However, it is not possible to view the image in that folder due to .htaccess restrictions for security reasons, but other options exist:
Extend the REST API with a method to serve profile images (similar to get_document_revision)
Login on the server from your rails app and get the image
Create a simple entrypoint+module in SugarCRM, which can show the picture
Remove the .htaccess restriction for images (if it doesn't create a security risk in your setup)
In such a scenario, I faced a problem where the upload folder stores the file under the name of the record ID, i.e. the GUID, without a file extension.
So to cope with this I wrote a function to copy the file at the same level but with its extension (see the sketch below).
Example:
A PNG file in the upload folder named, say, '32sdft-tg35f-Tuhis-675rtyf-77666-46dgc' will end up as '32sdft-tg35f-Tuhis-675rtyf-77666-46dgc.png'.
Now only the path is required to render the image.
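The copy itself is simple; a rough Python equivalent of that function (the upload path and extension are placeholders, since my original ran server-side) would be:

import shutil
from pathlib import Path

def copy_with_extension(upload_dir, guid, ext="png"):
    """Copy upload/<guid> to upload/<guid>.<ext> so it can be served with an extension."""
    src = Path(upload_dir) / guid
    dst = Path(upload_dir) / f"{guid}.{ext}"
    shutil.copyfile(src, dst)
    return dst

# e.g. copy_with_extension("/var/www/sugarcrm/upload",
#                          "32sdft-tg35f-Tuhis-675rtyf-77666-46dgc")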
Everything else applies as suggested by our friend Kare!