List all tables in an mdb file - jackcess

Is there any way to list the names of all of the tables in an MDB file? I am attempting to create a program that tests the user on Quizbowl questions. I would like to organize the questions and answers so that each question set is located within its own table. Simply put, I am unfamiliar with the API for Jackcess - I have tried searching to see if there is a method that would do this, but have failed.
Thank you.

Simply use the .getTableNames() method of the Database object, like this:
import java.io.File;

import com.healthmarketscience.jackcess.Database;
import com.healthmarketscience.jackcess.DatabaseBuilder;

// ...

String dbFileSpec = "C:/Users/Public/mdbTest.mdb";
try (Database db = DatabaseBuilder.open(new File(dbFileSpec))) {
    for (String tableName : db.getTableNames()) {
        System.out.println(tableName);
    }
} catch (Exception e) {
    e.printStackTrace(System.err);
}
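If you then want to read the rows of one of those tables (for example, one of your question-set tables), Jackcess lets you fetch a Table by name and iterate it. This is just a sketch reusing dbFileSpec from above; the table and column names ("QuestionSet1", "Question", "Answer") are placeholders for whatever you actually create:
import com.healthmarketscience.jackcess.Row;
import com.healthmarketscience.jackcess.Table;

// ...

try (Database db = DatabaseBuilder.open(new File(dbFileSpec))) {
    Table table = db.getTable("QuestionSet1"); // placeholder table name
    for (Row row : table) {
        // placeholder column names - use the columns you defined
        System.out.println(row.get("Question") + " -> " + row.get("Answer"));
    }
} catch (Exception e) {
    e.printStackTrace(System.err);
}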

How to seed images in the database asp.net core 2.1

I have the following database schema and I would like to seed data into the database, but I can't figure out how to seed the images on the first go, or what should go into the table entity.
I need help to know where I need to make changes.
Thanks.
Since you referenced Core, here's the easiest way (in Program.Main):
try
{
    var host = BuildWebHost(args);
    using (var scope = host.Services.CreateScope())
    {
        var services = scope.ServiceProvider;
        try
        {
            // resolve your EF Core context and run the seeder
            var context = services.GetRequiredService<myContext>();
            DbInitializer.Seed(context);
        }
        catch
        {
            // log the seeding error here, then rethrow
            throw;
        }
    }
    host.Run();
}
catch
{
    // log the startup error here, then rethrow
    throw;
}
and create a class called DbInitializer with a method Seed that takes an EF context. I think you can take it from there.
(and don't post images of code, post the code using Ctrl+K to format code-blocks)
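For completeness, here is roughly what such a DbInitializer could look like. This is only a sketch: since your schema image isn't included, myContext, the Products DbSet, and the entity/property names are all made up, and the image is simply stored as a byte[] read from a file on disk:
using System.IO;
using System.Linq;

public static class DbInitializer
{
    public static void Seed(myContext context)
    {
        // make sure the database exists (use context.Database.Migrate() if you rely on migrations)
        context.Database.EnsureCreated();

        // only seed once
        if (context.Products.Any())
            return;

        // store the image as a byte array; "SeedData/sample.jpg" is a placeholder path
        var imageBytes = File.ReadAllBytes("SeedData/sample.jpg");

        context.Products.Add(new Product
        {
            Name = "Sample product",
            Image = imageBytes
        });

        context.SaveChanges();
    }
}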

Simple select from MongoDB with vibed

I am learning how to use MongoDB from vibe.d. I wrote a simple app that, as far as I can tell, should perform a find operation. But when I run it I get the error: Querying uninitialized MongoCollection. What am I doing wrong?
import vibe.core.log;
import vibe.db.mongo.mongo;
import vibe.d;
import std.stdio;
import std.array;
void main()
{
    MongoCollection m_posts;
    foreach (p; m_posts.find("{}"))
    {
        writeln(p);
    }
}
There is a mongo example in the vibe.d repository.
It comes down to this pattern:
void main()
{
    auto db = connectMongoDB("localhost").getDatabase("test");
    auto coll = db["collection"];
    foreach (i, doc; coll.find("{}"))
        writefln("Item %d: %s", i, doc.toJson().toString());
}
In your snippet you attempted to use a collection object without actually connecting to the database and retrieving the collection from it. That is exactly what the error is about.
You just created the MongoCollection object and did not initialize it with anything. That's why the error complains about an uninitialized collection. You should connect it to a database and put some data in it. Have a look at http://vibed.org/api/vibe.db.mongo.collection/MongoCollection for examples.

Why does the CRUD generator fail with Twig_Error_Loader in Symfony 2.3?

I'm building my own CRUD generator in Symfony 2.3. This is my code:
namespace Gotakey\BackendBundle\Command;

use Sensio\Bundle\GeneratorBundle\Generator\DoctrineCrudGenerator;
use Sensio\Bundle\GeneratorBundle\Command\GenerateDoctrineCrudCommand;

class MyDoctrineCrudCommand extends GenerateDoctrineCrudCommand
{
    protected $generator;

    protected function configure()
    {
        parent::configure();
        $this->setName('gotakey:generate:crud');
        $this->setDescription('My Crud generate');
    }

    protected function getGenerator($bundle = null)
    {
        $generator = new DoctrineCrudGenerator($this->getContainer()->get('filesystem'), __DIR__.'/../Resources/crud');
        $this->setGenerator($generator);
        return parent::getGenerator();
    }
}
I have the skeleton in my bundle at /src/Gotakey/BackendBundle/Resources/crud. When I run the command, it displays the following error:
[Twig_Error_Loader]
The "" directory does not exist.
Does anyone know what I'm doing wrong?
Thanks, and sorry for my English - I'm not an expert.
After much reading, I got it working. I created a folder with the following structure: APP_PATH/Resources/SensioGeneratorBundle/skeleton/crud/views/.
In that views folder I placed the template files: edit.html.twig.twig, index.html.twig.twig ...
More information: http://symfony.com/doc/current/bundles/SensioGeneratorBundle/index.html

How to retrieve dynamic content from database?

I'm evaluating Java-based CMSes and selecting one as our CMS. Right now I'm learning dotCMS, and I need to know how to retrieve content from the database the way a traditional JSP/BO application does. I'm new to dotCMS; the official documents only explain how to add static content, not dynamic content - say, running a SQL query, getting the wanted data, and putting it into pages. We are building an internal website where employees can browse news, events, colleague information, etc., all managed through a CMS, so the information is definitely dynamic and updated regularly. We plan to use Spring MVC on the project. Any ideas on the question?
Thank you.
To get this to work you need to do a few things:
If you want to use a different database, then you can add a new resource to the conf/Catalina/localhost/ROOT.xml file. If you want to use the dotCMS database to host the additional tables then you can skip this step.
From within your Java code you can get a database connection using the DbConnectionFactory class and then read the data from the database. Here's an example:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

Connection conn = DbConnectionFactory.getConnection();
try {
    Statement selectStatement = conn.createStatement();
    try {
        selectStatement.execute("SELECT * FROM your_table WHERE your_where_clause etc...");
        ResultSet result = selectStatement.getResultSet();
        if (result.next()) {
            // ... do your stuff here ...
            // for example:
            // Long dbId = result.getLong("Id");
            // String stringField = result.getString("stringFieldName");
            // int intField = result.getInt("intFieldName");
        }
    } finally {
        selectStatement.close();
    }
} catch (SQLException e1) {
    // Log the error here
}
If you want to use this data in velocity you'll need to create a viewtool. Read more about that here: http://dotcms.com/docs/latest/DynamicPluginsViewtool
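A minimal sketch of what such a viewtool might look like, assuming the Velocity ViewTool interface that dotCMS viewtools implement and reusing the DbConnectionFactory pattern from above. The class name, the getNames() method, the table and column names, and the $mytool template variable are all placeholders, and you still need to register the tool as described in the linked documentation:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

import org.apache.velocity.tools.view.tools.ViewTool;

public class MyTableViewtool implements ViewTool {

    public void init(Object initData) {
        // nothing to initialize for this simple example
    }

    // callable from a template as $mytool.getNames() once the tool is registered
    public List<String> getNames() throws SQLException {
        List<String> names = new ArrayList<String>();
        Connection conn = DbConnectionFactory.getConnection();
        Statement selectStatement = conn.createStatement();
        try {
            selectStatement.execute("SELECT name FROM your_table");
            ResultSet result = selectStatement.getResultSet();
            while (result.next()) {
                names.add(result.getString("name"));
            }
        } finally {
            selectStatement.close();
        }
        return names;
    }
}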

How to use the sqoop generated class in MapReduce?

A Sqoop import generates a Java file containing a class, and that class contains the code that gives MapReduce access to the column data for each row. (The Sqoop import was done as text, without the --as-sequencefile option, with one line per record and commas between the columns.)
But how do we actually use it?
I found a public method parse() in this class that takes Text as input and populates all the members of the class, so to practice I modified the WordCount application to convert a line of text from the TextInputFormat in the mapper into an instance of the class generated by Sqoop. But that causes an "unreported exception com.cloudera.sqoop.lib.RecordParser.ParseError; must be caught or declared to be thrown" when I call the parse() method.
Can it be done this way, or is a custom InputFormat necessary to populate the class with the data from each record?
OK, this seems obvious once you find out, but as a Java beginner it can take time.
First, configure your project:
Just add the Sqoop-generated .java file to your source folder.
I use Eclipse to import it into my class source folder.
Then make sure your project's Java build path is configured correctly:
Add the following jar files under the project's Properties / Java Build Path / Libraries / Add External JARs
(for Hadoop CDH4+):
/usr/lib/hadoop/hadoop-common.jar
/usr/lib/hadoop-[version]-mapreduce/hadoop-core.jar
/usr/lib/sqoop/sqoop-[sqoop-version]-cdh[cdh-version].jar
Then adapt your MapReduce source code.
First configure the job:
public int run(String[] args) throws Exception
{
    Job job = new Job(getConf());
    job.setJarByClass(YourClass.class);
    job.setMapperClass(SqoopImportMap.class);
    job.setReducerClass(SqoopImportReduce.class);
    FileInputFormat.addInputPath(job, new Path("hdfs_path_to_your_sqoop_imported_file"));
    FileOutputFormat.setOutputPath(job, new Path("hdfs_output_path"));
    // I simply use Text as output for the mapper but it can be any class you designed
    // as long as you implement it as a Writable
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    ...
Now configure your mapper class.
Let's assume your Sqoop-generated Java file is called Sqimp.java and the table you imported had the following columns: id, name, age.
Your mapper class should look like this:
public static class SqoopImportMap
extends Mapper<LongWritable, Text, Text, Text>
{
    public void map(LongWritable k, Text v, Context context)
    {
        Sqimp s = new Sqimp();
        try
        {
            // this is where the code generated by sqoop is used.
            // it automatically parses one line of the imported data into an instance of the generated class,
            // to let you access the data inside the columns easily
            s.parse(v);
        }
        catch (ParseError pe)
        {
            // do something if there is an error.
        }
        try
        {
            // now the imported data is accessible:
            // e.g.
            if (s.age > 30)
            {
                // submit the selected data to the mapper's output as a key/value pair.
                context.write(new Text(String.valueOf(s.age)), new Text(String.valueOf(s.id)));
            }
        }
        catch (Exception ex)
        {
            // do something about the error
        }
    }
}
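The driver above also sets a reducer class, which the answer doesn't show. Here is a minimal sketch of what SqoopImportReduce could look like, assuming Text keys and values and that you simply want to pass the selected records through unchanged:
public static class SqoopImportReduce
extends Reducer<Text, Text, Text, Text>
{
    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
    throws IOException, InterruptedException
    {
        // emit every selected record as-is; replace this with your own aggregation if needed
        for (Text value : values)
        {
            context.write(key, value);
        }
    }
}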