How to save an image to a database using the ICEfaces 3.0.1 FileEntry component - PostgreSQL

I want to use a FileEntry component to save an image to a database (PostgreSQL, accessed through JPA) with JSF. From searching, I think I have to use an InputStream, but I don't know how to use it to convert the image into a bytea value that can be saved to the database, in a column called, for example, book_cover (of type bytea). What code should be in the backing bean? Please help me. I have something like this:
Icefaces 3.0.1 FileEntry: FileEntryListener is never called

You're trying to do it all at once. Break it down step-by-step into pieces you can understand on their own.
First make sure that receiving the image and storing it to a local file works. There are lots of existing examples for this part.
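For that first step, here is a minimal listener sketch for the ICEfaces 3 (ACE) FileEntry component; the bean method name is a placeholder, and you'd point the component's fileEntryListener attribute at it:

import java.io.File;
import org.icefaces.ace.component.fileentry.FileEntry;
import org.icefaces.ace.component.fileentry.FileEntryEvent;
import org.icefaces.ace.component.fileentry.FileEntryResults;

public void uploadListener(FileEntryEvent event) {
    FileEntry fileEntry = (FileEntry) event.getSource();
    FileEntryResults results = fileEntry.getResults();
    for (FileEntryResults.FileInfo info : results.getFiles()) {
        if (info.isSaved()) {
            // The upload has already been written to a temporary file on disk.
            File uploadedFile = info.getFile();
            // Confirm you can open/copy this file before moving on to the JPA part.
        }
    }
}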
Then figure out how to map bytea in your JPA implementation and store/retrieve files from disk. You didn't mention which JPA provider you're using, so it's hard to be specific about this.
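As a sketch of the mapping step, assuming Hibernate as the provider and a hypothetical book table: note that with Hibernate on PostgreSQL a plain byte[] usually maps to bytea, while adding @Lob can switch it to a large object (oid) column instead, so check your provider's documentation.

import javax.persistence.*;

@Entity
@Table(name = "book")
public class Book {
    @Id
    @GeneratedValue
    private Long id;

    // Maps to a bytea column with Hibernate on PostgreSQL.
    @Column(name = "book_cover")
    private byte[] cover;

    public byte[] getCover() { return cover; }
    public void setCover(byte[] cover) { this.cover = cover; }
}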
Finally, connect the two up by writing directly from the stream, but only once you know both parts work correctly by themselves. At this point you will find that you can't actually send a stream directly to the database if you want to use a bytea field - you must fetch the whole file into memory as a byte buffer.
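Reading the saved upload into memory is then short; book, setCover and em (the EntityManager) are placeholder names tying together the sketches above:

// Loads the entire file into memory - fine for covers, risky for huge files.
byte[] coverBytes = java.nio.file.Files.readAllBytes(uploadedFile.toPath());
book.setCover(coverBytes);
em.merge(book);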
If you want to be able to do stream I/O with files in the database you can do that using PostgreSQL's support for "large objects". You're very unlikely to be able to work with this via JPA, though; you'll have to manage the files in the DB directly with JDBC using the PgJDBC extensions for large object support. You'll have to unwrap the Connection object from the connection pooler to get the underlying Connection that you can cast to PgConnection in order to access this API.
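A rough outline of that large-object route with PgJDBC follows; the unwrap step varies by connection pool and driver version, so treat it as a sketch rather than drop-in code:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

static long storeCoverAsLargeObject(Connection conn, File uploadedFile) throws Exception {
    conn.setAutoCommit(false);  // large object calls must run inside a transaction
    PGConnection pgConn = conn.unwrap(PGConnection.class);
    LargeObjectManager lom = pgConn.getLargeObjectAPI();
    long oid = lom.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
    LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
    try (InputStream in = new FileInputStream(uploadedFile)) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) {
            lo.write(buf, 0, n);
        }
    } finally {
        lo.close();
    }
    conn.commit();
    return oid;  // store this in an ordinary column so you can find the large object again
}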
If you're stuck on any individual part, feel free to post appropriately detailed and specific questions about that part; if you link back to this question that'll help people understand the context. As it stands, this is a really broad "show me how to use technology X and Y to do Z" that would require writing (or finding) a full tutorial to answer.

Related

What's the best way to store a huge Map object populated at runtime to be reused by another tool?

I'm writing a Scala tool that encodes ~300 JSON Schema files into files of a different format and saves them to disk. I later need these schemas again for instantiating JSON data files; or rather, I don't need the whole of each schema but only a few fields of each.
I was thinking that the best solution could be to populate a Map object (while the tool encodes the schemas) containing only the info that I need, and later re-use the Map object (in another run of the tool) as an already compiled and populated map.
I've got two questions:
1. Is this really the most performant solution? and
2. How can I save the Map object, created at runtime, on disk as a file that can be later built/executed with the rest of my code?
I've read several posts about serialization and storing objects, but I'm not entirely sure whether they cover what I need. Also, I'm not sure this is the best solution, and I would like to hear an opinion from people with more experience than me.
What I would like to achieve is an elegant solution that allows me to lookup values from a map generated by another tool.
The whole process of compiling/building/executing sometimes is still confusing to me, so apologies if the question is trivial.
To answer your question, I think using an embedded KV store would be more efficient, considering the number of files and the amount of traversal.
Here is a short wiki on how to use RocksJava; you can consider it a KV store: https://github.com/facebook/rocksdb/wiki/RocksJava-Basics
You can use the reference below to serialize and deserialize an object in Scala and put it into RocksDB as a key-value pair, as I mentioned in the comment.
Convert Any type in scala to Array[Byte] and back
As for how to use RocksDB, the following dependency in your build will suffice:
"org.rocksdb" % "rocksdbjni" % "5.17.2"
Thanks.

Need some help understanding the Vocabulary of Interlinked Datasets (VoID) in Linked Open Data

I have been trying to understand VoID in Linked Open Data. It would be great if anyone could help clarify some of my confusion.
Does it need to be stored in a separate file, or can it be included in the RDF dataset itself? If so, how do I query it? (A sample query would be really helpful.)
How is the information in VoID used in real life?
Does it need to be stored in a separate file, or can it be included in the RDF dataset itself? If so, how do I query it? (A sample query would be really helpful.)
In theory no, but for practical purposes yes. In the end the information is encoded in triples, so it doesn't really matter which file you put them in, and you could argue that it's best to actually put the VoID info into the data files and serve those triples with your data as meta-info. It's queryable like any other RDF: either load it into a SPARQL endpoint or use a library that can load RDF files directly. This, however, also shows why a separate file makes sense: instead of having to load potentially large data files just to get some dataset meta-info, it makes sense to offer the metadata in its own file.
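As a concrete example, here is a small Apache Jena sketch that loads a file and runs a SPARQL query over its VoID triples; the file name is a placeholder, and the same query works against a SPARQL endpoint:

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class VoidQuery {
    public static void main(String[] args) {
        Model model = RDFDataMgr.loadModel("void.ttl");
        String q =
            "PREFIX void: <http://rdfs.org/ns/void#> " +
            "PREFIX dcterms: <http://purl.org/dc/terms/> " +
            "SELECT ?dataset ?title ?triples WHERE { " +
            "  ?dataset a void:Dataset . " +
            "  OPTIONAL { ?dataset dcterms:title ?title } " +
            "  OPTIONAL { ?dataset void:triples ?triples } " +
            "}";
        try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                System.out.println(rs.next());
            }
        }
    }
}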
How is the information in VoID used in real life?
VoID is actually used in several scenarios already, but mostly as a recommendation and a good idea. The most prominent use case I know of is getting your dataset shown in the LOD Cloud. You currently have to register it with datahub.io and add a VoID file (example from my associations dataset).
Other examples (sadly many defunct nowadays) can be found here: http://semanticweb.org/wiki/VoID.html

MarkLogic "XDMP-FRAGTOOLARGE" error while storing 200MB+ File using REST

When I try to store a 200MB+ XML file in MarkLogic using REST, it gives the following error: "XDMP-FRAGTOOLARGE: Fragment of /testdata/upload/submit.xml too large for in-memory storage".
I have tried the Fragment Roots and Fragment Parents options but still get the same error.
But when I store the file without the '.xml' extension in the URI, it saves the file, although no XQuery operations can be performed on it.
MarkLogic won't be able to derive the MIME type from a URI without an extension. It will then fall back to storing the file as binary.
I think that if you were to use xdmp:document-load from Query Console, you might be able to load it correctly, as that will not try to hold the entire document in memory first. It won't help you much, though; you will likely hit the same error elsewhere. The REST API has to pass the document through memory, so that won't work like this.
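For reference, a Query Console load would look roughly like this (the server path is a placeholder):

xdmp:document-load("/path/on/server/submit.xml",
  <options xmlns="xdmp:document-load">
    <uri>/testdata/upload/submit.xml</uri>
  </options>)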
You could raise memory settings in the Admin UI, but you are generally better off splitting your input. MarkLogic Content Pump (MLCP) will allow you to do so using the aggregate record options. Those split the file into smaller pieces based on a particular element and store the pieces as separate documents inside MarkLogic.
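An MLCP invocation along these lines would do it; host, credentials, and the record element are placeholders for your setup:

mlcp.sh import -host localhost -port 8000 \
  -username admin -password admin \
  -input_file_path /data/submit.xml \
  -input_file_type aggregates \
  -aggregate_record_element record \
  -output_uri_prefix /testdata/upload/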
HTH!

Best way to load static, local data into UITableView

I have some static data that is going to be shipped in my iPhone app's bundle. It is updated very rarely (about twice a year) and the app will not be doing any networking. I'm going to update the data manually when these changes occur.
I want to know what the best way to load this data is. I've already started using an XML file and parsing it as needed, but it's a huge amount of data to do this with. I'm finding it tedious. There are about 120 pages worth of stuff, with images etc. Just not fun.
I've heard about Core Data, but I don't really know if it's going to do what I want. I want to find a way to just create a UITableView controller and a detail view, then somehow bind the data to these controllers. (A teensy amount of example code would be appreciated for this part.)
If anyone has any suggestions, please feel free to leave an answer or comment.
Here's a sample of my XML:
<?xml version="1.0"?>
<jftut>
<heading name="Introduction">
<subheading>Jungleflasher Overview</subheading>
<subheading>Before Using Jungleflasher</subheading>
</heading>
<heading name="Which Drive do I have?">
<image>DriveIdentification.png</image>
</heading>
<heading name="Drives">
<manufacturer name="Samsung">
<version>MS25</version>
<version>MS28</version>
</manufacturer>
<manufacturer name="Hitachi">
<version>32 through 59</version>
<version>78</version>
<version>79</version>
</manufacturer>
<manufacturer name="BenQ">
<version>VAD6038</version>
</manufacturer>
<manufacturer name="LiteOn">
<version>74850c</version>
<version>83850c v1</version>
<version>83850c v2</version>
<version>93450c</version>
</manufacturer>
</heading>
</jftut>
Below each of those bottom nodes will be an article detailing how to perform a task related to them.
If the question needs more detail, just ask :)
Thanks,
Aurum Aquila
I think Core Data will serve your purpose in a flexible way. You said updates are rare, but even if they were not, it wouldn't be tedious with Core Data, where entities are mapped to objects and assigning values updates the database automatically, without writing a single SQL statement.
I would like to know what format your data is in.
First of all, store all your local data in an SQLite file. Then you can use Core Data as a wrapper for all your operations.
For reference you can follow tutorial 1 & Tutorial 2

In salesforce.com can you have multivalued attributes?

I am developing a Novell Identity Manager driver for Salesforce.com, and am trying to understand the Salesforce.com platform better.
I have had really good success to date. I can read pretty much arbitrary object classes out of SFDC, and create eDirectory objects for them, and what not. This is all done and working nicely. (Publisher Channel). Once I got Query events mapped out, most everything started working in the Publisher Channel.
I am now working on sending events back to SFDC (Subscriber channel) when changes occur in eDirectory.
I am using the upsert() function in the SOAP API, and with Novell Identity Manager, you basically build the SOAP doc, and can see the results as you build it. (You can do it in XSLT or you can use the various allowed tokens to build the document in DirXML Script. I am using DirXML Script which has been working well so far.).
The upshot of that comment is that I can build the SOAP document and see it, to be sure I get it right. This is usually different from the Java/C++ approach that the sample code usually takes. It's much more visual this way.
There are several things about upsert() that I do not entirely understand. I know how to blank a value, should I get that sort of event. Inside the <urn:sObjects> node, add a node like (assuming you get your namespaces declared already):
<urn1:fieldsToNull>FieldName</urn1:fieldsToNull>
I know how to add a value (AttrValue) to the attribute (FieldName), add a node like:
<FieldName>AttrValue</FieldName>
All this works and is pretty straight forward.
The question I have is: can a value in SFDC be multi-valued? In eDirectory, a change to a multi-valued attribute can happen in two ways:
All values can be removed, and the new set re-added.
The single value removed can be sent as that sort of event (remove-value) or many values can be removed in one operation.
Looking at SFDC, I only ever see multi-picklist attributes that seem to be stored in a single entry, colon- or semicolon-delimited. Is there another kind of multi-valued attribute managed differently in SFDC? And if so, how would one manipulate it via the SOAP API?
I still have to decide if I want to map those multi-picklists to a single string, or a multi valued attribute of strings. First way is easier, second way is more useful... Hmmm... Choices...
Some references:
I have been using the page Sample SOAP messages to understand what the docs should look like.
Apex Explorer is a kicking tool for browsing the database and testing queries. Much like DBVisualizer does for JDBC connected databases. This would have been so much harder without it!
SoapUI is also required, and a lovely tool!
As far as I know there's no multi-value field other than multi-select picklists (and they map to a semicolon-separated string). Generally the platform encourages you to create a proper relationship with another (possibly new, custom) table if you need to associate multiple values with your data.
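So in your upsert() document a multi-select picklist travels as a single field node whose value is the semicolon-joined list; the field name here is made up:

<urn1:Interests__c>Skiing;Hiking;Reading</urn1:Interests__c>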
Only other "unusual" thing I can think of is how the OwnerId field on certain objects (Case, Lead, maybe something else) can be used to point to User or Queue record. Looks weird when you are used to foreign key relationships from traditional databases. But this is not identical with what you're asking as there will be only one value at a time.
Of course you might sometimes be surprised by the values you see in the database, depending on the viewing user's locale (stuff like the System Administrator profile becoming Systeembeheerder in Dutch). But this will still be a single value, translated on the fly just before the query results are sent back to you.
When I had to perform SOAP integration with SFDC, I always used WSDL files and most of the time was fine with the Java code generated out of them with Apache Axis. Hand-crafting the SOAP message yourself seems... wow, a bit hardcore. Are you sure you prefer visualising the XML over the generation of classes, exceptions and all this stuff ready for use with one of several out-of-the-box integration methods? If they ever change the WSDL, I just need to regenerate the classes from it, whereas changes to your SOAP message creation library might be painful...
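For comparison, the generated-stub route starts out roughly like this; the class names depend on which WSDL (enterprise vs. partner) you generate from, so treat them as illustrative:

import com.sforce.soap.enterprise.LoginResult;
import com.sforce.soap.enterprise.SforceServiceLocator;
import com.sforce.soap.enterprise.SoapBindingStub;

SoapBindingStub binding = (SoapBindingStub) new SforceServiceLocator().getSoap();
LoginResult lr = binding.login("user@example.com", "passwordPlusToken");
// Point the stub at the server URL returned by login, set the session
// header, then call binding.upsert(...) with typed SObject instances.
binding._setProperty(SoapBindingStub.ENDPOINT_ADDRESS_PROPERTY, lr.getServerUrl());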