Contentful dynamic ID creation on import

I am importing a bunch of Contentful entries as a bulk upload, but I need to supply an ID for each one when I do. Since the entries aren't created yet, there are no IDs, so I have to generate one dynamically in order to publish with their CLI tool.
Does anyone know what sort of algorithm/crypto this ID is based on? I'm assuming it's Base64. I figured I'd just use a UUID, but I'm curious if anyone knows.
This seems to be what I need: https://shortunique.id/
An example one would be:
72GgATwHoirbXUR5GECrL8
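For what it's worth, Contentful entry IDs just need to be unique, URL-safe strings (up to 64 characters, as far as I know), so the exact scheme is up to you; the 22-character example above looks like roughly 128 bits in base 62, which a shortened UUID would also give you. A minimal sketch with the short-unique-id package linked above (note: v5+ of the package exposes rnd(); older versions make the instance itself callable):

// Generate a Contentful-style 22-character ID with short-unique-id.
const ShortUniqueId = require('short-unique-id');

const uid = new ShortUniqueId({ length: 22 }); // 22 chars, like the example above
const entryId = uid.rnd(); // e.g. "72GgATwHoirbXUR5GECrL8"
console.log(entryId);

You would then supply that value as the entry's sys.id in the file you feed to the CLI import tool.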

Related

AzureDevops - extract test steps to powerbi with odata query

I need to extract a table with the test steps that correspond to each test case from Azure DevOps into Power BI.
I was able to retrieve a list of tables that I can extract with OData, but none of them contains test steps. I'm attaching the metadata request and an extract of its results.
I've read that another possibility would be to use an API query, but I'm not sure which one.
Does anyone know a possible solution?
Thank you.
According to the note in this documentation:
"You can't add fields with a data type of Plain Text (long text) or HTML (rich-text). These fields aren't available from Analytics for the purposes of reporting."
Since the type of the Steps field is Text (multiple lines), it cannot be extracted with OData.
You can try using the work items REST API to get the details of the test case, which will contain the Steps detail.
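A hedged sketch of that REST call in Node.js (18+, for the global fetch): the steps come back as an XML string in the Microsoft.VSTS.TCM.Steps field. The organization, project, work item ID and PAT variable below are placeholders.

// Fetch one test case work item and pull out its Steps field.
const org = 'your-org';
const project = 'your-project';
const testCaseId = 1234;
const pat = process.env.AZDO_PAT; // Personal Access Token with work-item read scope

async function getTestSteps() {
  const url = `https://dev.azure.com/${org}/${project}/_apis/wit/workitems/${testCaseId}?api-version=6.0`;
  const res = await fetch(url, {
    headers: { Authorization: 'Basic ' + Buffer.from(':' + pat).toString('base64') },
  });
  const workItem = await res.json();
  // Steps arrive as XML, e.g. <steps><step id="2" type="ActionStep">...</step></steps>
  return workItem.fields['Microsoft.VSTS.TCM.Steps'];
}

getTestSteps().then(console.log);

From there you could parse the XML and feed the rows into Power BI.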

What is the best way to import data into holochain from another source, like mongo?

MongoDB => Holochain Rust DHT
How to import, if possible
If I am using a different app backend, like Mongo, and I get my Holochain set up correctly and configured, is there a way to get the data from Mongo into Holochain? How would I do that?
Definitely technologically possible; you could write a nodejs script, fire up a Holochain container with the holochain-nodejs library, and import all the data as one agent. Then when users join the HC-based network, they vouch for their identity in some way and 'claim' all the data as theirs.
Here's a sketch of how it could look:
You (let's call you 'agent 0') import all the data.
For each user, you create an 'anchor' with the user's ID (I'll explain anchors in a sec) and link each piece of data to the anchor. You also record that user's password hash as a private entry on your own source chain.
A user joins the network and is required to prove continuity of identity.
They do this by using node-to-node messaging to send their user ID and their password hash to you privately.
You authorise them to claim their identity by publishing an entry that says that "agent public key x = user ID". (You would probably want to link from your authorisation entry to their user ID anchor and their public key too, for convenience's sake.)
The user collects all their data by asking for all the links to their user ID anchor.
The user then publishes each piece of their data to their own source chain as a way of 'claiming' ownership of it.
Now, every redundant copy of the data in the DHT has two authors in its metadata fields -- you and the user that actually owns the data. Peers validate that piece of data by saying, "Is agent 0 already the author of this piece of data? If so, has agent 0 published an authorisation entry that says that the new author of this data is allowed to claim/republish it?"
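To make the first couple of steps concrete, here's a very rough Node.js sketch. The holochain-nodejs setup below follows the legacy Config/Scenario style and may differ between versions, and the zome functions create_user_anchor and import_record are hypothetical names for zome code you would write yourself.

// Agent 0 imports every Mongo record into the DHT, one anchor per legacy user.
const { Config, Scenario } = require('@holochain/holochain-nodejs');
const { MongoClient } = require('mongodb');

const instance = Config.instance(Config.agent('agent0'), Config.dna('./dist/app.dna.json'));

new Scenario([instance]).run(async (stop, { agent0 }) => {
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const records = mongo.db('legacy').collection('posts').find();

  for await (const record of records) {
    // One anchor per legacy user ID, so the data can be claimed later.
    await agent0.callSync('main', 'create_user_anchor', { userId: record.userId });
    // Commit the record and link it to that user's anchor.
    await agent0.callSync('main', 'import_record', { userId: record.userId, content: record.body });
  }

  await mongo.close();
  stop();
});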
Problems with this approach (not insurmountable):
Agent 0 has to be online all the time, because they never know when a new user is going to sign up and try to claim their data.
Agent 0 has to import a ton of data (I don't think it'd be vastly time-prohibitive though).
For relational data, there's the chicken-and-egg problem of how to create links if the data doesn't exist. I'm thinking not of linking data to data -- that can be done on initial import -- but of linking data to humans, who now have a public key which might not exist on the DHT yet because they haven't joined the network. That would always have to happen per-user once they join, and it could create some cyclic dependency problems.
Anchors
Re: anchors, an anchor is just a pattern that consists of a base and a link -- the base is a simple string, so it's easy for anyone who knows the string to find it by hash. It acts as, well, an anchor to hang links off of. That's why I'm recommending using it to connect legacy user IDs to pieces of content. You can get sample source code for implementing the anchor pattern at https://github.com/holochain/mixins/tree/master/anchors (note that this is for the legacy version of Holochain, so it's written in JavaScript).
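For illustration, the pattern boils down to something like this in a legacy-style JavaScript zome. The function names are mine, not the mixin's exact API; commit and getLinks are the old holochain-proto zome primitives.

// An anchor is just an entry with well-known content, used as a base for links.
function userAnchor(userId) {
  return commit('anchor', { anchorType: 'user', anchorText: userId });
}

function linkDataToUser(userId, dataHash) {
  var anchorHash = userAnchor(userId);
  // Hang the piece of data off the user's anchor.
  commit('anchor_link', { Links: [{ Base: anchorHash, Link: dataHash, Tag: 'owned_by' }] });
}

function getDataForUser(userId) {
  // Anyone who knows the user ID can recompute the anchor's hash and follow the links.
  var anchorHash = userAnchor(userId);
  return getLinks(anchorHash, 'owned_by', { Load: true });
}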
(answer provided by pauldaoust)

How to import users in CRM 2011 with source GUID

We have three organization tenants: Dev, Test and Live, all hosted on premise (CRM 2011 [5.0.9690.4376] [DB 5.0.9690.4376]).
Because of the way dialogs use GUIDs to reference records in lookups, we aim to keep the GUIDs of static records the same across all three tenants.
While all other entities are working fine, I am failing to import USERS while maintaining their GUIDs. I am using Export/Import to move the data from the master tenant (Dev) into the Test and Live tenants. It is very similar to what the 'configuration migration tool' does in CRM 2013.
The issue I am facing is that for all other entities I can see the GUID field and hence map it during the import wizard, but no such field shows up for the SystemUser entity when running the import wizard. For example, with Account, I export an account, amend the CSV file and import it into the target tenant. When I do this, I map AccountId (from the target) to the Account of the source, and as a result the account's AccountId will be the same in both source and target.
At this point I am about to give up, but that will mean every dialog that uses a User lookup will fail.
Thank you for your help.
Try the following steps. I would strongly recommend trying this on an old, out-of-use tenant before trying it on a live system. I am not sure whether this is supported by MS, but it works for me. (Another thing: you will have to manually assign BU and Roles after the import.)
Create an Advanced Find. Include all required fields for the SystemUser record. Add criteria that select the list of users you would like to move across.
Export.
Save the file as CSV (this will show the first few hidden columns in Excel).
Rename the Primary Key field (in this case User) and remove all other fields marked Do Not Modify.
Import the file and map this User column (with the GUID) to the User from CRM.
Import the file and check the GUIDs in both tenants.
Good luck.
My only suggestion is that you could try to write a small console application that connects to both your source and destination organisations.
Using that, you can duplicate the user records from the source to the destination, preserving the IDs in the process.
I can't say 100% that it'll work, but I can't immediately think of a reason why it wouldn't. This assumes none of the users you're copying over already exist in your target environments.
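The answer above presumably means a C# app built on the SDK; purely to illustrate the copy-with-the-same-GUID idea, here is a rough Node.js sketch against CRM 2011's OData endpoint (/XRMServices/2011/OrganizationData.svc). Authentication (AD/NTLM on-premise) is omitted, and creating SystemUser records through the API may be restricted in your environment, so treat this as a sketch only.

// Copy SystemUser records from source to target, preserving SystemUserId.
const SOURCE = 'http://crm-dev/Org/XRMServices/2011/OrganizationData.svc';
const TARGET = 'http://crm-test/Org/XRMServices/2011/OrganizationData.svc';

async function copyUsers() {
  const res = await fetch(
    `${SOURCE}/SystemUserSet?$select=SystemUserId,DomainName,FirstName,LastName`,
    { headers: { Accept: 'application/json' } }
  );
  const users = (await res.json()).d.results;

  for (const user of users) {
    // Supplying the source GUID as the primary key keeps lookups working across tenants.
    await fetch(`${TARGET}/SystemUserSet`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        SystemUserId: user.SystemUserId,
        DomainName: user.DomainName,
        FirstName: user.FirstName,
        LastName: user.LastName,
      }),
    });
  }
}

copyUsers().catch(console.error);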
I prefer to resolve these issues by creating custom workflow activities. For example, you could create a custom workflow activity that returns a user record from an input domain name given as a string.
This means your dialogs contain only shared configuration values, e.g. mydomain\james.wood, which are used to dynamically find the record you need. Your dialog is then linked to a specific record, but without having to encode the source GUID.

SharePoint SOAP service GetListItem with respect to updated date?

Right now I am working on a project to fetch data from a SharePoint list using the SOAP API. I have successfully fetched the complete list, but now I want to fetch only the data that was updated after a specific date.
Is it possible to fetch such data with a SOAP query? I can see a last-updated field at the bottom when I view a single item. Is it somehow possible to use that field?
Yes, you can use the Web Services to do a lot of things, like filtering a list result. I don't know which language you use, but with JavaScript you can look at these two frameworks that should help you:
http://aymkdn.github.io/SharepointPlus/ : an easy way to create your queries (I created it)
http://spservices.codeplex.com/ : the most popular framework, but less easy to use (in my opinion)
You can also look at the documentation on MSDN (the param to use is query): http://msdn.microsoft.com/en-us/library/lists.lists.getlistitems.aspx
At last I found the answer.
The last-updated date and time can be retrieved from the list column "Modified".
The SOAP response will have the value in the attribute "ows_Modified".
Muhammad Usman
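Putting the two answers together, here is a hedged sketch using the SPServices library mentioned above (it wraps the Lists.asmx GetListItems call and needs jQuery on the page). The list name and cutoff date are placeholders; the filter uses the built-in Modified field, and each returned row carries its timestamp in the ows_Modified attribute.

// Fetch only items modified on or after a given date.
$().SPServices({
  operation: 'GetListItems',
  listName: 'MyList', // placeholder: your list's name
  CAMLQuery:
    "<Query><Where>" +
    "<Geq><FieldRef Name='Modified'/>" +
    "<Value Type='DateTime' IncludeTimeValue='TRUE'>2014-01-01T00:00:00Z</Value></Geq>" +
    "</Where></Query>",
  completefunc: function (xData) {
    $(xData.responseXML).SPFilterNode('z:row').each(function () {
      console.log($(this).attr('ows_Title'), $(this).attr('ows_Modified'));
    });
  },
});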

couchdb, how to rename an attachment

I'm writing a CMS-like application and I would like my images to be stored as attachments in CouchDB.
The problem is in naming the attachments, because I don't want all my images to end up with the same name (e.g. /db/doc_id/thumb.jpg).
Ideally attachment names should depend on the doc.name field. To make this work I would have to rename the attachment each time the user changed the name (description|alt) of the current photo document.
So my question is: how do I change an attachment's name? Or maybe I should solve my problem another way?
Szymon,
I'd suggest considering using UUIDs for the document names, and storing the attachment as something "static" like "original", "thumb", "300wide", etc. The name given by the user when they upload the file can be stored as a key, and you can use a MapReduce index to retrieve the image/file using that name later.
If you go that route, though, you'll have to come at the "duplicates" problem a bit differently--as you could easily upload the same image multiple times with the same user-provided name and there would be no conflict.
Depending on what you're building, though, making the user provide a unique name is generally unwise--Flickr (among many others) doesn't, for instance.
If you really do need to make the doc_id == the name given by the user, then it would still be wise to store the attachments under static names, so you don't have to update the attachment name.
Lastly, if you feel you really must change the attachment name (and there are certainly cases where you need to), the simplest way is to GET the attachment from the old location (or with the document), PUT it as the new name, and DELETE the attachment with the old name.
Hope that helps!
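A minimal sketch of that GET, PUT, DELETE dance over CouchDB's plain HTTP attachment API (Node.js 18+ for the global fetch; the database URL and names are placeholders):

// Rename an attachment by copying it to the new name and deleting the old one.
const DB = 'http://localhost:5984/mydb';

async function renameAttachment(docId, oldName, newName) {
  // 1. Fetch the document to learn its current revision.
  const doc = await (await fetch(`${DB}/${docId}`)).json();

  // 2. GET the attachment bytes under the old name.
  const att = await fetch(`${DB}/${docId}/${encodeURIComponent(oldName)}`);
  const body = Buffer.from(await att.arrayBuffer());

  // 3. PUT the bytes under the new name (needs the current rev).
  const put = await (await fetch(`${DB}/${docId}/${encodeURIComponent(newName)}?rev=${doc._rev}`, {
    method: 'PUT',
    headers: { 'Content-Type': att.headers.get('Content-Type') },
    body,
  })).json();

  // 4. DELETE the old attachment, using the rev returned by the PUT.
  await fetch(`${DB}/${docId}/${encodeURIComponent(oldName)}?rev=${put.rev}`, { method: 'DELETE' });
}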
Use this command:
curl -v http://localhost:5984/database/DocumentID/OldFileName?rev=RevisionID -X MOVE -H "Destination: NewFileName"