ServiceNow REST API: Get list of column names

From the admin UI, there is a Tables and Columns explorer that dutifully shows all available columns for a table, such as the incident table.
My ultimate goal is to query all fields of a given table that I can insert data into (mostly the incident and problem tables), match that against the data I have, and then insert the record with a POST to the table. The immediate problem is that when I query sys_dictionary, as various forums suggest, I get back only a subset of the columns the UI displays.
Postman query:
https://{{SNOW_INSTANCE}}.service-now.com/api/now/table/sys_dictionary?sysparm_fields=internal_type,sys_name,name,read_only,max_length,active,mandatory,comments,sys_created_by,element&name={{TableName}}&sysparm_display_value=all
I understand the reduced result set has something to do with some of them being real columns in the table vs. links to other tables, but I can't find any documentation describing how to get the result set the UI shows using the REST API.
The follow-on problem is that I can't find an example payload for the incident table with all standard fields filled in, so that I can populate as many fields as I have data for.

The reason you don't get all the columns back is that the table you are querying inherits from another table. You need to walk the inheritance chain first, finding all parent tables, and then query sys_dictionary for all of those tables.
In the case of the incident table, query the sys_db_object table (the table of all tables) to find its parent, which is the task table. Then query sys_db_object again to find task's parent, which is empty, so the relevant tables are incident and task. In practice you would write this as a loop, building up a list of tables by querying for the parent of the table at the end of the list.
Once you have this list, you can query sys_dictionary with sysparm_query=name=incident^ORname=task, which should return your full list of columns.
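A minimal Node.js sketch of that loop, assuming Node 18+ for the built-in fetch and basic auth; INSTANCE, USER, and PASS are placeholders:
// Walk sys_db_object via super_class until we reach a table with no parent,
// then query sys_dictionary for every table in the chain at once.
const BASE = 'https://INSTANCE.service-now.com/api/now/table';
const AUTH = 'Basic ' + Buffer.from('USER:PASS').toString('base64');

async function get(url) {
    const res = await fetch(url, { headers: { Authorization: AUTH, Accept: 'application/json' } });
    return (await res.json()).result;
}

async function tableHierarchy(table) {
    const tables = [];
    let query = 'name=' + table;
    for (;;) {
        const rows = await get(`${BASE}/sys_db_object?sysparm_query=${query}&sysparm_fields=name,super_class`);
        if (!rows.length) break;
        tables.push(rows[0].name);
        const parent = rows[0].super_class; // reference field: { link, value }
        if (!parent || !parent.value) break; // reached the root (e.g. task)
        query = 'sys_id=' + parent.value;
    }
    return tables; // e.g. ['incident', 'task']
}

async function columnsFor(table) {
    const q = (await tableHierarchy(table)).map(t => 'name=' + t).join('^OR');
    return get(`${BASE}/sys_dictionary?sysparm_query=${q}&sysparm_fields=element,internal_type,mandatory,read_only`);
}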

I think you could do this by creating your own Scripted REST API and iterating over/inspecting the fields:
(function process(/*RESTAPIRequest*/ request, /*RESTAPIResponse*/ response) {
    var queryParams = request.queryParams;
    var table = queryParams.table;
    var t = new GlideRecord(table);
    t.initialize();
    var fields = t.getElements(); // or getFields() if global scope
    var fieldList = [];
    for (var i = 0; i < fields.length; i++) {
        var glideElement = fields[i]; // or fields.get(i) if global scope
        var descriptor = glideElement.getED();
        var fldName = glideElement.getName().toString();
        var fldLabel = descriptor.getLabel().toString();
        var fldType = descriptor.getInternalType().toString();
        var canWrite = glideElement.canWrite();
        if (canWrite) {
            fieldList.push({
                name: fldName,
                type: fldType,
                label: fldLabel,
                writable: canWrite
            });
        }
    }
    return fieldList;
})(request, response);
This should save you the hassle of determining the inheritance of fields. Here's the sample output:
{
    "result": [
        {
            "name": "parent",
            "type": "reference",
            "label": "Parent",
            "writable": true
        },
        {
            "name": "made_sla",
            "type": "boolean",
            "label": "Made SLA",
            "writable": true
        },
        ...
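Once the Scripted REST API is saved, you can call it from Postman like any other endpoint; the namespace and relative path below are placeholders that depend on how you define the API in your instance:
GET https://{{SNOW_INSTANCE}}.service-now.com/api/{namespace}/{api_path}?table=incident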


Get existing and inserted IDs in upsert operation (db.Clauses clause.OnConflict)

I have a scenario where I need to insert an array of data into a table. If the combination of name and version already exists (composite unique constraint), I need to get those existing IDs; otherwise I need the newly inserted IDs. If both cases occur, I need both the inserted and the existing IDs.
The models and code I tried are given below.
Model Dependency:
type Dependency struct {
    gorm.Model
    ID      string `gorm:"primaryKey; not null"`
    Name    string `gorm:"not null; UniqueIndex:idx_name_version"`
    Version string `gorm:"not null; UniqueIndex:idx_name_version"`
}
Go code with the GORM query to insert the dependencies:
var saveDependencyData []models.Dependency
// Dependencies are read from the API input:
// [
//     {
//         "name": "node",
//         "version": "16.0.0"
//     },
//     {
//         "name": "node",
//         "version": "18.0.0"
//     }
// ]
for _, dep := range Dependecies {
    saveDependencyData = append(saveDependencyData, models.Dependency{
        ID:      nanoid.New(),
        Name:    dep.Name,
        Version: dep.Version,
    })
}
res := db.Clauses(clause.OnConflict{
    Columns:   []clause.Column{{Name: "name"}, {Name: "version"}},
    DoUpdates: clause.AssignmentColumns([]string{"name"}),
}).Create(saveDependencyData)
GORM query output:
INSERT INTO "dependencies" ("id","created_at","updated_at","deleted_at","name","version") VALUES ('QanL-nfNFrOGdxG2iXdoQ','2022-10-06 19:21:13.079','2022-10-06 19:21:13.079',NULL,'react','16.0.0'),('Yw1YyQ-aBqrQtwZ72GNtB','2022-10-06 19:21:13.079','2022-10-06 19:21:13.079',NULL,'react','18.0.0') ON CONFLICT ("name","version") DO UPDATE SET "name"="excluded"."name" RETURNING "id"
This query returns the list of IDs I need, but I could not find a way to retrieve them; using Scan() fetches all the rows in the table.
Either help me retrieve the returned IDs from the GORM db.Clauses() call above, or suggest another efficient way to get those (inserted and existing) IDs with an upsert query.
As indicated in the comments: several GORM functions expect a pointer as their argument and update that variable with information.
That is obviously the case for the functions whose main purpose is to retrieve information (First, Find, ..., cf. https://gorm.io/docs/query.html).
But it is also the case for functions that modify data, like
Create (https://gorm.io/docs/create.html),
Update (https://gorm.io/docs/update.html#Returning-Data-From-Modified-Rows) or
Delete (https://gorm.io/docs/delete.html#Returning-Data-From-Deleted-Rows).
So the solution in this case is to pass Create(&saveDependencyData) instead of Create(saveDependencyData).
After the call, saveDependencyData will hold the up-to-date values from the database, including the IDs from the RETURNING clause.
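A minimal, self-contained sketch of the fix, assuming PostgreSQL as in the question; the DSN and the IDs are placeholders:
package main

import (
    "fmt"

    "gorm.io/driver/postgres"
    "gorm.io/gorm"
    "gorm.io/gorm/clause"
)

type Dependency struct {
    gorm.Model
    ID      string `gorm:"primaryKey; not null"`
    Name    string `gorm:"not null; UniqueIndex:idx_name_version"`
    Version string `gorm:"not null; UniqueIndex:idx_name_version"`
}

func main() {
    dsn := "host=localhost user=USER password=PASS dbname=DB" // placeholder
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        panic(err)
    }
    db.AutoMigrate(&Dependency{})

    deps := []Dependency{
        {ID: "id-1", Name: "node", Version: "16.0.0"},
        {ID: "id-2", Name: "node", Version: "18.0.0"},
    }
    // Pass a pointer so GORM can scan the RETURNING values back into the slice.
    db.Clauses(clause.OnConflict{
        Columns:   []clause.Column{{Name: "name"}, {Name: "version"}},
        DoUpdates: clause.AssignmentColumns([]string{"name"}),
    }).Create(&deps)

    for _, d := range deps {
        fmt.Println(d.ID) // existing ID on conflict, freshly generated ID otherwise
    }
}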

How to get the search input values from 3 DataTables with the same class?

Help with DataTables: I have 3 tables that share the same class, each one in its own tab. I want to read the value typed into each table's search box. I found a function that lets me get the value, but it only works for the first table; when I search in the other tables my handler runs, but it returns this result:
<empty string>
How can I get the search value from each of the 3 tables?
var prueba = $('.dataTableCls').DataTable();
prueba.on('search.dt', function() {
    // Always reads the first matching filter input on the page
    var value = $('.dataTables_filter input').val();
    console.log(value);
});
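An untested sketch of one way around this: initialise each table separately so every handler reads its own table's search term through the DataTables API (search() with no arguments returns the currently applied search string):
$('.dataTableCls').each(function () {
    var dt = $(this).DataTable();
    dt.on('search.dt', function () {
        console.log(dt.search()); // this table's search term, not the first one's
    });
});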

DynamoDB - How to upsert nested objects with updateItem

Hi, I am a newbie to DynamoDB. Below is the schema of the table:
{
    "user_id": 1,          // partition key
    "dob": "1991-09-12",   // sort key
    "movies_watched": {
        "1": {
            "movie_name": "twilight",
            "movie_released_year": "1990",
            "movie_genre": "action"
        },
        "2": {
            "movie_name": "harry potter",
            "movie_released_year": "1996",
            "movie_genre": "action"
        },
        "3": {
            "movie_name": "lalaland",
            "movie_released_year": "1998",
            "movie_genre": "action"
        },
        "4": {
            "movie_name": "serendipity",
            "movie_released_year": "1999",
            "movie_genre": "action"
        }
    }
    // ..... 6 more attributes
}
I want to insert a new item if the item (that user_id with dob) does not exist; otherwise, add the movies to the existing movies_watched map, checking that each movie is not already in the map.
Currently, I am trying to use the update(params) method.
Below is my approach:
function getInsertQuery (item) {
    const exp = {
        UpdateExpression: 'set',
        ExpressionAttributeNames: {},
        ExpressionAttributeValues: {}
    }
    // Top-level attributes (everything except the keys and the nested map)
    Object.entries(item).forEach(([key, item]) => {
        if (key !== 'user_id' && key !== 'dob' && key !== 'movies_watched') {
            exp.UpdateExpression += ` #${key} = :${key},`
            exp.ExpressionAttributeNames[`#${key}`] = key
            exp.ExpressionAttributeValues[`:${key}`] = item
        }
    })
    // Nested attributes, addressed via document paths into movies_watched
    let i = 0
    Object.entries(item.movies_watched).forEach(([key, item]) => {
        exp.UpdateExpression += ` movies_watched.#uniqueID${i} = :uniqueID${i},`
        exp.ExpressionAttributeNames[`#uniqueID${i}`] = key
        exp.ExpressionAttributeValues[`:uniqueID${i}`] = item
        i++
    })
    exp.UpdateExpression = exp.UpdateExpression.slice(0, -1) // drop trailing comma
    return exp
}
The above method builds an update expression, with expression names and values, for all top-level attributes as well as the nested attributes (with document paths).
It works well when the item already exists, updating the movies_watched map, but it throws an exception when the item does not exist and is being inserted. Below is the exception:
The document path provided in the update expression is invalid for update
Beyond that, I am still not sure how to check for duplicate movies in the movies_watched map.
Could someone guide me in the right direction? Any help is highly appreciated.
Thanks in advance!
There is no way to do this, given your model, without reading the item from DDB before the update (at which point the process is trivial). If you don't want to impose that additional read capacity on your table for updates, you need to re-design your data model:
You can change movies_watched to be a Set holding references to movies. The caveat is that a set can contain only numbers, strings, or binary values, so you would store the movie id or name, or keep the data as JSON strings in the set and parse them back on read. With a set you can perform an ADD operation on the movies_watched attribute (see the sketch after this list). https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html#Expressions.UpdateExpressions.ADD
You can go with a single-table design and store the movies watched as separate items (PK: userId, SK: movie_id). To get a user you would run a Query specifying only PK=userId; you get back a collection where one item is your user record and the others are the movies watched. If you are new to DynamoDB and are learning the ropes, I would suggest this approach. https://www.alexdebrie.com/posts/dynamodb-single-table/
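A sketch of the first option using the AWS SDK for JavaScript v2 DocumentClient (table and attribute names are assumptions based on the question; run inside an async function): ADD on a string set creates the set if it is missing and silently skips ids that are already present, so no prior read and no duplicate check are needed.
const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB.DocumentClient();

await ddb.update({
    TableName: 'users', // assumed table name
    Key: { user_id: 1, dob: '1991-09-12' },
    // ADD creates movies_watched as a set if absent and unions in the new ids
    UpdateExpression: 'ADD movies_watched :m',
    ExpressionAttributeValues: {
        ':m': ddb.createSet(['1', '3']) // movie ids to add
    }
}).promise();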

How to convert map<anydata> to json

In my CRUD REST service I insert a record into a DB and want to respond to the caller with the newly created record. I am looking for a clean way to convert the map to json.
I am running Ballerina 0.991.0 and using PostgreSQL.
The return value of update("INSERT ...") is a map.
I tried convert and stamp, but it did not work for me.
import ballerinax/jdbc;
...
jdbc:Client certificateDB = new({
    url: "jdbc:postgresql://localhost:5432/certificatedb",
    username: "USER",
    password: "PASS",
    poolOptions: { maximumPoolSize: 5 },
    dbOptions: { useSSL: false }
}); ...
var ret = certificateDB->update("INSERT INTO certificates(certificate, typ, scope_) VALUES (?, ?, ?)", certificate, typ, scope_);
// here is the data, it is map<anydata>
ret.generatedKeys
The map should know which data type it holds, right?
Then it should be easy to convert it to json, like this:
{"certificate":"{certificate:
"-----BEGIN
CERTIFICATE-----\nMIIFJjCCA...tox36A7HFmlYDQ1ozh+tLI=\n-----END
CERTIFICATE-----", typ: "mqttCertificate", scope_: "QARC", id_:
223}"}
Right now I do a foreach and build the json manually, which is quite ugly. Maybe somebody has tips on how to do this in a nicer way.
It cannot be excluded that it is due to my lack of programming skills :-)
The return value of the JDBC update remote function is sql:UpdateResult|error.
sql:UpdateResult is a record with two fields (refer to https://ballerina.io/learn/api-docs/ballerina/sql.html#UpdateResult):
updatedRowCount of type int - the number of rows that were affected/updated by the given statement execution.
generatedKeys of type map - a map of the auto-generated column values resulting from the update operation (only if the corresponding table has auto-generated columns), given as key/value pairs of column name and column value. So this map contains only the auto-generated column values.
But your requirement is to get the entire row that was inserted by the given update function. That can't be returned by the update operation itself. To get it, you have to execute a JDBC select operation with the matching criteria. The select operation returns a table or an error, and that table can be converted to json easily using the convert() function.
For example, let's say the certificates table has an auto-generated primary key column named 'cert_id'. Then you can retrieve that id value using the code below.
int generatedID = <int>updateRet.generatedKeys.CERT_ID;
Then use that generated id to query the data:
var ret = certificateDB->select("SELECT certificate, typ, scope_ FROM certificates where id = ?", (), generatedID);
json convertedJson = {};
if (ret is table<record {}>) {
    var jsonConversionResult = json.convert(ret);
    if (jsonConversionResult is json) {
        convertedJson = jsonConversionResult;
    }
}
Refer to the example https://ballerina.io/learn/by-example/jdbc-client-crud-operations.html for more details.

MongoDB C# official driver: List<BsonObject> query issue and always old values?

I have an unclear issue when querying with two criteria, like Id and Iso. I use a repository storing data such as id, iso, value. I created an index ("_Id", "Iso") to speed up queries, but queries only return my cursor if I use a single criterion like _id; they return nothing if I use two (_id, Iso) (see the commented code).
Is the index affecting the response, or is the query method failing?
Using MongoDB v1.6.5 and the official C# driver.
Sample:
// Getting data
public List<BsonObject> Get_object(string ID, string Iso)
{
    using (var helper = BsonHelper.Create())
    {
        //helper.Db.Repository.EnsureIndex("_Id","Iso");
        var query = Query.EQ("_Id", ID);
        //if (!String.IsNullOrEmpty(Iso))
        //    query = Query.And(query, Query.EQ("Iso", Iso));
        var cursor = helper.Db.Repository.FindAs<BsonObject>(query);
        return cursor.ToList();
    }
}
Data:
{
    "_id": "2345019",
    "Iso": "UK",
    "Data": "Some data"
}
After that I updated my data using the Update.Set() methods. I can see the changed data using MongoView, and the new data are correct, but the query keeps returning the same old values. To view these values I use a page that could potentially be cached, but adding a timestamp at the end changes nothing; the page still returns the same old data. Your comments are welcome, thanks.
I do not recall offhand how the C# driver creates indexes, but the shell command for creating an index looks like this:
db.things.ensureIndex({j:1});
Notice the 1, which specifies an ascending index on that key.
In your code, you have:
helper.Db.Repository.EnsureIndex("_Id","Iso");
Perhaps it should be:
helper.Db.Repository.EnsureIndex("_Id", 1);
helper.Db.Repository.EnsureIndex("Iso", 1);
It could also be related to the fact that you are creating indexes on "_Id" while the actual id field is called "_id" ... MongoDB field names are case sensitive.
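For example, a hedged sketch of the combined query using the stored field casing (same legacy Query builders as in your sample):
// Match the exact casing stored in the documents: "_id", not "_Id".
var query = Query.And(
    Query.EQ("_id", ID),
    Query.EQ("Iso", Iso));
var cursor = helper.Db.Repository.FindAs<BsonObject>(query);
return cursor.ToList();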
Have a quick look through the index documentation: http://www.mongodb.org/display/DOCS/Indexes