OrientDB Could not access the security JSON file - orientdb

Following my upgrade from OrientDB 2.1.16 to 2.2.0, I have started to get the following messages during initialisation:
2016-05-19 09:28:38:690 SEVER ODefaultServerSecurity.loadConfig() Could not access the security JSON file: /config/security.json [ODefaultServerSecurity]
2016-05-19 09:28:39:142 SEVER ODefaultServerSecurity.onAfterActivate() Configuration document is empty [ODefaultServerSecurity]
The database launched, but I don't like the warnings. I've looked through the docs but I can't find anything specifically pertaining to this. There are some links on Google that lead to dead GitHub pages.
First of all, I need to get hold of either a copy of the security.json it is expecting or the docs explaining the expected structure.
Secondly, I need to know how and where to set it.

There are 3 ways to specify the location and name of the security.json file used by the new OrientDB security module.
1) Set the environment variable ORIENTDB_HOME, and the server will look for the file here:
"${ORIENTDB_HOME}/config/security.json"
2) Set the "server.security.file" property in the orientdb-server-config.xml file (see the example below).
3) Pass the location by setting the JVM system property -Dserver.security.file on startup.
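For example, option 2 would look something like this inside the <properties> section of orientdb-server-config.xml (the path is just a placeholder):
<properties>
    <!-- Point the new security module at an explicit security.json -->
    <entry name="server.security.file" value="/path/to/config/security.json"/>
</properties>
Option 3 is the equivalent JVM system property added to the server's Java options, e.g. -Dserver.security.file=/path/to/config/security.json.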
Here's the documentation on the new features + a link to the configuration format.
https://github.com/orientechnologies/orientdb-docs/blob/master/Security-OrientDB-New-Security-Features.md
-Colin
OrientDB LTD
The Company behind OrientDB


IBM i Access Client Solutions - Printer Output but using an API

I want to replicate the functionality of the IBM i Access Client Solutions "Printer Output" tool that is used to retrieve PDFs of spooled files from our IBM Db2 environment. Instead of a user interface, I want to replicate the functionality as an API.
I want to construct an API which takes inputs such as the filter parameters pictured below:
The output of the API would be PDF(s) of the printer output spooled files that match the parameters specified.
I figure that if I am able to access the i Access Printer Output tool, then I should be able to use my credentials to access the spool files using an API or something like that.
Where would I start in constructing something like this?
Also, are there any IBM guides that contain relevant information? I have looked but been unsuccessful. The Programmer's Toolkit is also not available with my version of i Access.
Also, I don't have developer roles, so if this is possible, it would need to be something I can do with little authority on the IBM i servers and in the Access client.
First off, IBM ACS is Java-based, so everything it does can be found in the IBM Toolbox for Java, aka JTOpen, aka JT400.
http://jt400.sourceforge.net/
Documentation: https://www.ibm.com/docs/en/i/7.4?topic=java-toolbox
You're going to want to look at the example of reading a transformed spooled file.
The transformation actually happens on the IBM i side, by specifying the appropriate workstation customization object: QCTXPDF in this case, rather than the example's original QWPTIFFG4.
// The following examples demonstrate how to set up a PrintParameterList to
// obtain different transformations when reading spooled file data. In the code
// segments that follow, assume a spooled file already exists on a server, and
// the createSpooledFile() method creates an instance of the SpooledFile class
// representing the spooled file.
// Create a spooled file
SpooledFile splF = createSpooledFile();
// Set up print parameter list
PrintParameterList printParms = new PrintParameterList();
printParms.setParameter(PrintObject.ATTR_WORKSTATION_CUST_OBJECT, "/QSYS.LIB/QCTXPDF.WSCST");
printParms.setParameter(PrintObject.ATTR_MFGTYPE, "*WSCST");
// Create a transformed input stream from the spooled file
PrintObjectTransformedInputStream is = splF.getTransformedInputStream(printParms);
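From there, the transformed data is just a java.io.InputStream, so it can be written straight out as a .pdf. A minimal continuation of the snippet above (the output path is a placeholder; exception handling is omitted, as in the original example):
// Write the PDF-transformed spool file data to a local file
// ("spooled-file.pdf" is just a placeholder path)
try (java.io.FileOutputStream out = new java.io.FileOutputStream("spooled-file.pdf")) {
    byte[] buffer = new byte[32 * 1024];
    int bytesRead;
    while ((bytesRead = is.read(buffer)) != -1) {
        out.write(buffer, 0, bytesRead);
    }
} finally {
    is.close();
}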

distilbert model is not working in ktrain

I tried to use the DistilBERT classifier, but I am getting the following error.
This is my code
(X_train, y_train), (X_test, y_test), prepro = text.texts_from_df(
    train_df=data_train,
    text_column="Cleaned",
    label_columns=col,
    val_df=data_test,
    maxlen=500,
    preprocess_mode="distilbert",
)
and here is the error
OSError: Model name 'distilbert-base-uncased' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). We assumed 'distilbert-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
Due to constraints in my office environment, I can only work with TensorFlow 2.2 and Python 3.8. Right now I am using ktrain 0.19.
Do you think it will affect my current environment if I downgrade ktrain to 0.16?
This error may happen if there is a network or firewall issue preventing download of the tokenizer files. See this FAQ entry for remedies.
Also, when you use preprocess_mode='distilbert', texts_from* functions return TransformerDataset instances, not arrays. You'll need to replace (X_train, y_train) with train_data, for example. See this example notebook.
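A rough sketch of the adjusted calls, reusing data_train, data_test, and col from the question (the batch size and learning rate are arbitrary placeholders):
import ktrain
from ktrain import text

# With preprocess_mode='distilbert', texts_from_df returns TransformerDataset
# objects rather than (X, y) array pairs.
train_data, val_data, preproc = text.texts_from_df(
    train_df=data_train,
    text_column="Cleaned",
    label_columns=col,
    val_df=data_test,
    maxlen=500,
    preprocess_mode="distilbert",
)

model = text.text_classifier("distilbert", train_data=train_data, preproc=preproc)
learner = ktrain.get_learner(model, train_data=train_data, val_data=val_data, batch_size=6)
learner.fit_onecycle(3e-5, 1)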

Is it possible to get shell properties for an item not in the shell namespace?

Short Version
How does the shell get the properties of a file?
Long Version
The Windows Shell exposes a rich system of properties about items (e.g. files and folders) in the shell namespace.
For example:
System.Title: A Quick Guide for SQL Server Native Client OLE DB to ODBC Conversion
System.Author: George Yan (KW)
System.Document.LastAuthor: rohanl
System.Comment: To learn more about this speaker, find other TEDTalks, and subscribe to this Podcast series, visit www.TED.com Feedback: tedtalks#ted.com
System.ItemParticipants: George Yan (KW)
System.Company: Contoso
System.Language: English (United States)
System.Document.DateCreated: 6/10/2014 5:16:30 PM
System.Image.HorizontalSize: 1845 pixels
System.Image.VerticalSize: 4695 pixels
System.Image.HorizontalResolution: 71 dpi
System.Image.VerticalResolution: 71 dpi
In order for the shell to read these properties, it obviously has to use a lot of sources:
Windows Media Foundation IMFMetadata works great for images and movies
Windows Imaging Component (WIC) probably has a lot of APIs for reading metadata
I'm not sure if IFilter can retrieve Title, Author, Subject, Comments, etc. from Office documents
Either way, the shell has to read the file's content stream and do something with it in order to produce all these fancy shell properties. In other words:
IStream \
         +--> [magic] --> IPropertyStore
   .ext /
Can I use it with my own stream?
I have items that are not in the shell namespace; they are in a data store. I do expose them to the shell through IDataObject as CF_FILEDESCRIPTOR with an IStream when it's time to perform copy-paste or drag-drop. But outside of that they are just streamable blobs in a data store.
I'd like to be able to leverage all the existing work done by the very talented and hard-working shell team to read metadata from a "file", which in the end only exists as an IStream.
Is there perhaps a binding context option that lets me get a property store based on an IDataObject rather than an IShellItem2?
So rather than:
IPropertyStore ps = shellItem2.GetPropertyStore();
is there a:
IPropertyStore ps = GetShellPropertiesFromFileStream(stream);
?
How does the shell get all the properties of a file?
Bonus Chatter - IPropertyStoreFactory
This interface is typically obtained through IShellFolder::BindToObject or IShellItem::BindToHandler. It is useful for data source implementers who want to avoid the additional overhead of creating a property store through IShellItem2::GetPropertyStore. However, IShellItem2::GetPropertyStore is the recommended method to obtain a property store unless you are implementing a data source through a Shell folder extension.
Tried
IPropertyStore ps = CoCreateInstance(CLSID_PropertyStore);
IInitializeWithStream iws = ps.QueryInterface(IID_IInitializeWithStream);
But CLSID_PropertyStore does not support IInitializeWithStream.
Bonus Reading
MSDN: Initializing Property Handlers
Property handlers are a crucial part of the property system. They are invoked in-process by the indexer to read and index property values, and are also invoked by Windows Explorer in-process to read and write property values directly in the files.
MSDN: Registering and Distributing Property Handlers (spelunking the registry for fun and reading contracts from the other side)
(I have some experience with property store handlers.) How I see a solution:
Get the property store handler CLSID for your file extension. You should check two registry keys:
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\PropertySystem\PropertyHandlers\.yourext
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\PropertySystem\SystemPropertyHandlers
Create the two objects with CoCreateInstance.
If you have two objects, you can combine them into a single object with PSCreateMultiplexPropertyStore.
Query for IInitializeWithStream (you can also try to query IPersistStream).
If the property store object supports IInitializeWithStream/IPersistStream, you are lucky: just initialize your object and query the properties you need. If it does not, you still have the (dirty) option of creating a temporary file and then using IPersistFile. A sketch of looking up the per-extension handler and initializing it from a stream follows below.
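Here is a rough Win32/C++ sketch of that per-extension part: look up the handler CLSID, create it, and hand it your IStream. The function name is mine, error handling is trimmed, and a complete version would also load the SystemPropertyHandlers entries and combine them with PSCreateMultiplexPropertyStore as described above.
#include <windows.h>
#include <propsys.h>
#include <string>

// Sketch: create the registered property handler for an extension and
// initialize it from a caller-supplied IStream instead of a file.
HRESULT LoadPropertyStoreFromStream(PCWSTR ext, IStream *stream, IPropertyStore **pps)
{
    *pps = nullptr;

    // 1) Per-extension handler CLSID, stored as the default value of
    //    ...\PropertyHandlers\<.ext>
    std::wstring keyPath =
        L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\PropertySystem\\PropertyHandlers\\";
    keyPath += ext;

    wchar_t clsidText[64] = {};
    DWORD cb = sizeof(clsidText);
    LSTATUS ls = RegGetValueW(HKEY_LOCAL_MACHINE, keyPath.c_str(), nullptr,
                              RRF_RT_REG_SZ, nullptr, clsidText, &cb);
    if (ls != ERROR_SUCCESS) return HRESULT_FROM_WIN32(ls);

    CLSID clsid;
    HRESULT hr = CLSIDFromString(clsidText, &clsid);
    if (FAILED(hr)) return hr;

    // 2) Create the handler and ask for IInitializeWithStream directly.
    //    Handlers that only support IInitializeWithFile will fail here.
    IInitializeWithStream *init = nullptr;
    hr = CoCreateInstance(clsid, nullptr, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&init));
    if (FAILED(hr)) return hr;

    hr = init->Initialize(stream, STGM_READ);
    if (SUCCEEDED(hr))
        hr = init->QueryInterface(IID_PPV_ARGS(pps)); // the handler is the property store
    init->Release();
    return hr;
}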

Unable to run experiment on Azure ML Studio after copying from different workspace

My simple experiment reads from an Azure Storage Table, selects a few columns, and writes to another Azure Storage Table. This experiment runs fine on the workspace (let's call it WorkSpace1).
Now I need to move this experiment as-is to another workspace (call it WorkSpace2) using PowerShell and need to be able to run the experiment.
I am currently using this Library - https://github.com/hning86/azuremlps
Problem:
When I copy the experiment using 'Copy-AmlExperiment' from WorkSpace1 to WorkSpace2, the experiment and all its properties get copied except the Azure Table Account Key.
Now, this experiment runs fine if I manually enter the Account Key for the Import/Export modules on studio.azureml.net.
But I am unable to perform this via PowerShell. If I export (Export-AmlExperimentGraph) the copied experiment from WorkSpace2 as JSON, insert the AccountKey into the JSON file, and import (Import-AmlExperiment) it back into WorkSpace2, the experiment fails to run.
In PowerShell I get an "Internal Server Error : 500".
While running on studio.azureml.net, I get the notification "Your experiment cannot be run because it has been updated in another session. Please re-open this experiment to see the latest version."
Is there any way to move an experiment with external dependencies to another workspace and run it?
Edit: I think the problem is something to do with how the experiment handles the AccountKey. When I enter it manually, it's converted into a JSON array comprising RecordKey and IndexInRecord. But when I upload the JSON experiment with the AccountKey, it remains unchanged and does not get resolved into RecordKey and IndexInRecord.
For me, publishing the experiment as a private experiment in the Cortana Gallery is one of the most useful options. Only people with the link can see and add the experiment from the gallery. At the link below I've explained the steps I followed.
https://naadispeaks.wordpress.com/2017/08/14/copying-migrating-azureml-experiments/
When the experiment is copied, the password is wiped for security reasons. If you want to programmatically inject it back, you have to set another metadata field to signal that what you are setting is a plain-text password, not an encrypted one. If you export the experiment in JSON format, you can easily figure this out.
I think I found out why you are unable to get the credentials back in.
Export the JSON graph to your local disk, then update whatever parameter has to be updated.
You will also notice that the credentials are stored as 'Placeholders' instead of 'Literals'; it makes sense to change them to Literals.
You can do this by traversing the JSON to find the relevant parameters you need to update.
Here is a brief illustration of changing the Placeholder to a Literal.
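As a rough PowerShell sketch of that edit (the node and field names here - Graph, ModuleNodes, ParameterValues, ValueType, 'Account Key' - are assumptions on my part; inspect your own exported JSON for the real names before relying on this):
# Sketch only: load the exported graph, flip the credential parameter from a
# Placeholder to a Literal value, and save it before re-importing.
# All property names below are assumed - verify them against your own JSON.
$accountKey = '<your storage account key>'
$graph = Get-Content 'experiment.json' -Raw | ConvertFrom-Json

foreach ($module in $graph.Graph.ModuleNodes) {          # assumed node name
    foreach ($param in $module.ParameterValues) {        # assumed node name
        if ($param.Name -eq 'Account Key') {             # assumed parameter name
            $param.ValueType = 'Literal'                 # was 'Placeholder'
            $param.Value     = $accountKey
        }
    }
}

$graph | ConvertTo-Json -Depth 32 | Set-Content 'experiment.json'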

Error shown at tAccessInput component

While using the tAccessInput component in my job, it shows an error like:
It may not be a database that your application recognizes, or the file may be corrupt.
But all the connections and database/table names in my job are valid.
What could be the problem, and how can I resolve it?
Mentioning the exact DB Version (e.g. Access 2003 or Access 2007) and the associated database file name is important for tAccessInput/tAccessOutput.