Is it possible to add a directory to Kodi using the command line? I've been looking for this with no luck so far.
What I'm looking for is a way to automate, from the command line, the process of adding a directory that is otherwise done manually. For some reason this doesn't appear to be a popular question out there; am I missing something?
Crawling the Kodi/XBMC forums and wiki shows a few options. Here's what I've gathered...
Edit the Database Directly (not recommended)
Kodi stores this information in SQLite databases; however, these would be pretty tricky to manipulate yourself, as it requires knowing both the path of each SQLite database file and the relationships between the columns/tables in each one (assuming it's a strictly relational database file, which most are).
For example:
sqlite3 <path_to_kodi_preferences>/userdata/Database/MyVideos119.db
sqlite> .tables
actor movie_view studio_link
actor_link movielinktvshow tag
art musicvideo tag_link
bookmark musicvideo_view tvshow
country path tvshow_view
country_link rating tvshowcounts
director_link season_view tvshowlinkpath
episode seasons tvshowlinkpath_minview
episode_view sets uniqueid
files settings version
genre stacktimes writer_link
genre_link streamdetails
movie studio
Edit sources.xml
The official wiki mentions <path_to_kodi_preferences>/userdata/sources.xml for this but it still assumes you know how to manipulate an XML file programmatically and the community warns that this is potentially "invasive" and that the official addons/plugins aren't allowed to use this technique.
I dove into this and the XML seems like the way to go, for example, to add Videos:
<video>
<default pathversion="1"></default>
<source>
<name>Movies</name>
<path pathversion="1">/home/ubuntu/Movies/</path>
<allowsharing>true</allowsharing>
</source>
<source>
<name>Video Playlists</name>
<path pathversion="1">special://videoplaylists/</path>
<allowsharing>true</allowsharing>
</source>
+ <source>
+ <name>MyCustomDirectory</name>
+ <path pathversion="1">/home/ubuntu/MyCustomDirectory/</path>
+ <allowsharing>true</allowsharing>
+ </source>
</video>
... however comments suggest Kodi needs to be restarted and that this location still needs to be crawled/refreshed. There may be some "watchdog" add-ons that can do this for you.
Use the Add-On API
Another technique is to use the official Python API, such as through UpdateLibrary(database, path); however, examples usually involve calling the API directly from Python. Here's an example from the PlexKodiConnect GitHub project:
# xbmc is provided by Kodi's add-on environment; utils is PlexKodiConnect's helper module
# Make sure Kodi knows we wiped the databases
xbmc.executebuiltin('UpdateLibrary(video)')
if utils.settings('enableMusic') == 'true':
    xbmc.executebuiltin('UpdateLibrary(music)')
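The same builtins can also be fired from a shell without writing an add-on, via Kodi's event-client tool (this assumes the kodi-eventclients package, which provides kodi-send, is installed):
# Trigger a video library update from the command line.
kodi-send --action="UpdateLibrary(video)"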
Since the simplest solution is often the best, I would recommend working on a way to automate the sources.xml file. Modifying XML files is well-documented in nearly all programming languages (to that point, you could technically brute-force it and just inject an XML string at the given place without an XML parser 😈) and then handle the restart and refresh operations once the XML is confirmed to have been updated properly.
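As a minimal sketch of that approach in Python (the profile path here is an assumption; adjust it, and the source name/path, to your install):
import xml.etree.ElementTree as ET

SOURCES = "/home/ubuntu/.kodi/userdata/sources.xml"  # assumed Kodi profile location
tree = ET.parse(SOURCES)
video = tree.getroot().find("video")
# Append a <source> entry equivalent to the hand-written one above.
source = ET.SubElement(video, "source")
ET.SubElement(source, "name").text = "MyCustomDirectory"
path = ET.SubElement(source, "path")
path.set("pathversion", "1")
path.text = "/home/ubuntu/MyCustomDirectory/"
ET.SubElement(source, "allowsharing").text = "true"
tree.write(SOURCES, encoding="utf-8")
# Kodi still needs a restart and a library refresh to pick this up.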
I am building this Gtkmm3 application in Ubuntu and wanted to explore GSettings. All was going well while following the instructions on the 'Using GSettings' page, and then it was time to configure the makefiles. I use the Eclipse 2019-12 IDE with CDT (v9.10) and 'GNU Make Builder' as the builder. I'm totally perplexed as to how to introduce the macros listed on the GNOME page into the makefiles. I even tried changing the project to a 'C/C++ Autotools Project' using Eclipse, but the makefiles needed to add the macros were still missing. Creating a new project with GNU Autotools does create the necessary makefiles, but I was not able to get pkg-config to work with it.
Can anyone point me to a resource which explains how to compile the schema and how & where to load the resultant binary file (externally if necessary)? I'll consider myself blessed if someone has already made a Gtkmm3 C++ application with GSettings support using the Eclipse IDE on Linux and can share the details.
Finally I figured it out. I thought I'd share my findings here. Someone out there had already explained this for Python (link below).
Using GSettings with Python/PyGObject
Creating the schema
For the developer, the work starts with defining a schema for the settings. A schema is an XML file that looks something like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE schemalist SYSTEM "gio_gschema.dtd">
<schemalist>
  <schema id="org.gtk.skanray.emlibrary" path="/org/skanray/emlibrary/" gettext-domain="emlibrary">
    <key name="wave-pressure-ptrach-visible" type="b">
      <default>true</default>
      <summary>Set visibility of 'Ptrach' trace in pressure waveform.</summary>
      <description>The pressure waveform shows multiple traces where 'PAW' is always enabled and additionally 'Ptrach' can be displayed. This setting affects the visibility of the tracheal pressure trace shown in this waveform channel.</description>
    </key>
  </schema>
</schemalist>
The file name has to have a '.gschema.xml' suffix. The schema file should be in the project path, if only so that it gets pushed to SVN.
It is best to use an XML editor (e.g. Eclipse) that supports designing XML files from a DTD. Use the following DTD file.
gschema.dtd
It is possible to store anything derived from GVariant in GSettings. Refer to the following page to understand the basic types and the 'type' attribute to be used in the schema.
GVariant Format Strings
Compiling the schema
With the schema ready, (sudo) copy it into /usr/share/glib-2.0/schemas/ and then run:
> sudo glib-compile-schemas /usr/share/glib-2.0/schemas/
At this point, the newly added settings can be seen / modified using dconf editor.
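The settings can also be read and written from a terminal with the gsettings command-line tool (using the schema id from the example above):
# List every key/value under the schema, then read and write one key.
gsettings list-recursively org.gtk.skanray.emlibrary
gsettings get org.gtk.skanray.emlibrary wave-pressure-ptrach-visible
gsettings set org.gtk.skanray.emlibrary wave-pressure-ptrach-visible false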
Accessing GSettings from the application
Coming to the main event of the show, this is how an application can read (and/or write) settings. It is not necessary to bind a property of an object to a 'key' in GSettings; a setting may also be queried and used directly. Refer to the GSettings API reference for details.
Glib::RefPtr<Gio::Settings> refSettings = Gio::Settings::create("org.gtk.skanray.emlibrary");
CLineTrace * pTrace = NULL; // CLineTrace is derived from Gtk::Widget
…
pTrace = …
…
if(refSettings)
{
    refSettings->bind("wave-pressure-ptrach-visible",
                      pTrace,
                      "visible",
                      Gio::SETTINGS_BIND_DEFAULT);
}
Now you can fire up dconf editor and test the settings.
NOTE
Bindings are usually made in class constructors. However, binding to the 'visible' property of a widget can be a bit tricky. Typically the top-level window calls show_all() as the last line of its constructor, but by then the constructors of the top-level window's children have already finished executing, including making the bindings. If a setting had stored 'visibility' as false, the top-level window's call to show_all() would override that setting. In such cases it is advisable to perform the bind once, in the on_map() handler of the respective class.
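As a minimal sketch of that pattern in PyGObject terms (following the Python article linked above; TraceArea is a purely illustrative stand-in for CLineTrace, and only the schema and key names come from the examples here):
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gio, Gtk

class TraceArea(Gtk.DrawingArea):  # hypothetical stand-in for CLineTrace
    def __init__(self):
        super().__init__()
        self._settings = Gio.Settings.new("org.gtk.skanray.emlibrary")
        self._bound = False
        # Defer the bind until the widget is mapped, so the top-level
        # window's show_all() cannot clobber a stored 'visible' of false.
        self.connect("map", self._on_map)

    def _on_map(self, _widget):
        if not self._bound:  # perform the bind exactly once
            self._settings.bind("wave-pressure-ptrach-visible",
                                self, "visible",
                                Gio.SettingsBindFlags.DEFAULT)
            self._bound = True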
I'm sorry if this doesn't have enough information. I don't typically ask for help online like this.
I'm using DITA Open Toolkit 3.4 on Windows. I generated a plugin called "vcr2" using Jarno's (very excellent and helpful) PDF Plugin Generator and then made a handful of customizations. The plugin uses the pdf2 plugin as a base. When I try to use the vcr2 plugin, my images are not working. I've tracked the problem down to malformed image filenames in the image's href attribute.
For example:
In my source file (a DITA Task), the markup for one of my images looks like this:
<image href="MyRemindersChooseReminder.png"/>
If I run a transform with the pdf2 plugin, the images work fine. In the merged stage1.xml file in the Temp folder, the XML for that same image looks like this:
<image class="- topic/image " href="df2d132af27436c59c5c8c4282e112d62bec8201.png" placement="inline" xtrc="image:1;10:66" xtrf="file:/V:/Vasont/Extract/t12340879-minimal/t12340879.xml"/>
It is processed into a file Topic.fo, and looks like this:
<fo:external-graphic
src="url('file:/V:/Vasont/Extract/t12340879-minimal/MyRemindersChooseReminder.png')"/>
Everything works fine and the image looks fine.
If I run the same file through my 'vcr2' plugin, which just calls the same pdf2 plugin with some overrides, all the images get broken:
stage1.xml
<image class="- topic/image " href="df2d132af27436c59c5c8c4282e112d62bec8201.png" placement="inline" xtrc="image:1;10:66" xtrf="file:/V:/Vasont/Extract/t12340879-minimal/t12340879.xml"/>
Topic.fo
<fo:external-graphic
src="url('file:/V:/Vasont/Extract/t12340879-minimal/df2d132af27436c59c5c8c4282e112d62bec8201.png')"
/>
As I track this down further, it appears that somewhere in the map-reader Ant task this filename gets changed to that cryptic hexadecimal string. I think later on it's supposed to be changed back, or resolved to a complete URI, or something.
So, the two-part question is: Why does Open Toolkit change my filenames, and what's supposed to change them back?
DITA-OT's preprocess uses hashes for temporary filenames because that allows the code to avoid dealing with directory structures. This enables preprocess to work in so-called "map-first" mode, where it first processes all DITA map resources and only then starts to process DITA topic and image resources.
The preprocess has a step called clean-preprocess that can rewrite the temporary file names to match the source resource file names. However, this rewrite operation is disabled for PDF output because the original file names are not used for anything in that output type.
I'm sure this is a softball for those who are familiar with the Elastic Stack, but the docs I've read haven't left it super clear.
I essentially am trying to push pcap files through the ELK stack to visualize packet information using Kibana.
I am not looking to monitor this real time, but rather have the following behavior:
I drop a pcap into a directory, and something picks it up (FileBeat? PacketBeat -I? LogStash?)
Since a pcap file isn't really useful on its own, I might need to run it through tshark to produce readable JSON
I want this information in ElasticSearch
Use Kibana to make pretty graphs
From what I've read, Packetbeat's -I option takes a pcap file as input, but doesn't that only ship that single file? I want it to watch a directory as I drop pcaps into it. I guess what confuses me is that most of the docs talk about configuring an interface device to sniff in packetbeat.yml.
Anyway, ideally I was thinking it would look something like this:
packetbeat (watching for pcaps, spits out JSON) -> logstash (filters) -> elasticsearch (indexes) -> kibana (visualizes)
Is there a way to configure packetbeat to watch a dir for pcaps rather than an interface?
As of March 2021, you still can't do this natively with Packetbeat.
But you can easily "outsource" the watching of a directory tree to another tool, and have it call Packetbeat. Watchman (released by Facebook) is a good choice - it will keep track of files that have been processed. Then you could do something like the following to a) watch a directory and then b) take action when files are changed/added:
watchman watch /path/to/pcaps
# Watchman appends the matched file names as arguments to the trigger command.
watchman -- trigger /path/to/pcaps pcaptrigger '*.pcap' -- packetbeat -I
I am working on a project to build an OPC UA server from the specification.
I've gone far enough in the implementation; I am currently working on the write request, and I already have a few nodes in the server address space.
There are so many nodes, though, that it's almost impossible to create and add them one by one.
Anyway, back to the question: I've downloaded an XML file from the OPC Foundation containing the schema for all the nodes in the address space. Here is a link to the XML file.
What is the most efficient way to create nodes from the XML file? I am writing against a C95 compiler.
Below is a quick view of how nodes are represented in the NodeSet XML file:
<Nodes>
  <Node i:type="DataTypeNode">
    <NodeId>
      <Identifier>i=1</Identifier>
    </NodeId>
    <NodeClass>DataType_64</NodeClass>
    <BrowseName>
      <NamespaceIndex>0</NamespaceIndex>
      <Name>Boolean</Name>
    </BrowseName>
    <DisplayName>
      <Locale></Locale>
      <Text>Boolean</Text>
    </DisplayName>
    <Description>
      <Locale></Locale>
      <Text>Describes a value that is either TRUE or FALSE.</Text>
    </Description>
    <WriteMask>0</WriteMask>
    <UserWriteMask>0</UserWriteMask>
    <RolePermissions />
    <UserRolePermissions />
    <AccessRestrictions>0</AccessRestrictions>
    <References>
      <ReferenceNode>
        <ReferenceTypeId>
          <Identifier>i=45</Identifier>
        </ReferenceTypeId>
        <IsInverse>true</IsInverse>
        <TargetId>
          <Identifier>i=24</Identifier>
        </TargetId>
      </ReferenceNode>
    </References>
    <IsAbstract>false</IsAbstract>
    <DataTypeDefinition i:nil="true" />
  </Node>
</Nodes>
Programmatically filling a running OPC UA server with nodes is unacceptably slow.
You may want to investigate the ModelCompiler.
I found it fairly straightforward to fill a ModelDesign XML with data and generate code and a NodeSet2.xml from it. So even if you have no need for the generated C# code, which I suspect is your case, this approach may be useful.
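If it helps, the compiler invocation looks roughly like this (hedged from memory of the project's README; the exact flags vary between ModelCompiler versions, and the file names here are illustrative):
Opc.Ua.ModelCompiler.exe compile -d2 .\ModelDesign.xml -cg .\ModelDesign.csv -o2 .\Output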
You may also want to look at the UA-.NETStandard repository.
It offers a LoadFromXML method that reads your nodeset pretty quickly. You may find inspiration in that method.
Good luck, and a big thank you for your contributions to the OPC UA world.
Maybe I'm a bit late, but I'll answer in case it helps someone.
If you are using C/C++ with the open62541 SDK, I found that it is possible to generate *.c and *.h files to include in your OPC UA server, as described with some examples here: you only need to run a Python program, providing some parameters and the names of the output files to be generated, then include these files in your OPC UA server.
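For reference, the invocation looks roughly like this (a sketch based on open62541's documented nodeset compiler, run from an open62541 checkout; the input and output names are illustrative):
# Generates myNodeset.c/.h to compile into the server.
python ./tools/nodeset_compiler/nodeset_compiler.py \
    --types-array=UA_TYPES \
    --existing Opc.Ua.NodeSet2.xml \
    --xml myNodeSet2.xml \
    myNodeset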
Another way that I found is to use UaModeler by Unified Automation; in that case you can draw your information model in the program and export it to XML or to source files to include in your project.
I want to use SOLR's remote-streaming facility to extract and index the content of files.
This works fine if I pass stream.file=xxx as a parameter in the HTTP GET request.
However, I have a lot of these, and want to batch them up (i.e. not have to have a GET per file).
Is there a way I can do this in SOLR?
e.g. I'd like to be able to POST some xml like this:
<add>
<doc stream_file="filename">
<field name="id">123</field>
</doc>
<doc>...
This has been recently asked (and answered) in the solr-user mailing list.
I find that multiple ADDs are fast, so long as you only COMMIT the batch and don't try to COMMIT after every ADD. I would guess that the remaining performance penalty is not worth writing your own RequestHandler over.
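To illustrate the batching, here is a minimal Python sketch (the core URL and field values are assumptions) that sends many ADDs in one request and issues a single COMMIT at the end:
import requests

UPDATE_URL = "http://localhost:8983/solr/mycore/update"  # hypothetical core
HEADERS = {"Content-Type": "text/xml"}
# Send all ADDs in one <add> body instead of one request per document.
docs = "".join('<doc><field name="id">%d</field></doc>' % i for i in range(1, 101))
requests.post(UPDATE_URL, data="<add>%s</add>" % docs, headers=HEADERS)
# COMMIT once for the whole batch, not after every ADD.
requests.post(UPDATE_URL, data="<commit/>", headers=HEADERS)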