RapidSQL (Embarcadero) error db2abind.dll is missing - db2

I just installed RapidSQL 8.0.1 and tried connecting to a valid database. I'm fairly certain I have the right connection data (it was imported from another developer), but I'm getting the following error:
db2abind.dll Cannot be loaded! That will severely impact use of this application. Please restore the missing library.
I have created a ticket with Embarcadero, but I was wondering whether anyone else has had this problem and has found a solution.

According to the publib, db2abind.dll's functionality has been moved into db2app.dll since version 9 for Linux, UNIX, and Windows. It also mentions that, at that time, stub DLLs were provided for convenience's sake but would be removed in a future version.
Since LUW is now on version 9.7, perhaps this removal has taken place.
Application libraries have changed
Operating systems affected
All supported operating systems are affected.
Change
The following changes have been made:
db2app.dll was extended. It includes its original information, plus the information from the db2util.dll, db2abind.dll, and db2cli.dll libraries.
db2api.dll was extended. It includes its original information, plus the information from the db2cli.dll library.
Explanation
The library information is being consolidated.
Resolution
Stubs for the db2util.dll, db2abind.dll, and db2cli.dll libraries are still available for backwards compatibility. These stubs will be removed in a future version or release of the product. You should rebuild your application using the changed libraries.

So this was caused by the fact that I didn't have a DB2 client installed on my machine. I chose a lightweight DB2 client from among the many(!) available from IBM, and it got me past this issue.
http://www.db2dean.com/Previous/DB2Client.html
The above link was a good resource on understanding what was going on with IBM clients and DB2 connectivity.

Related

Enable JDBC in PostgreSQL for a non-programmer

I am looking for someone who can explain to me, in simple terms and with written instructions, how to create a JDBC in PostgreSQL (I am losing my mind with this). I found other answers on this page and elsewhere, but I couldn't follow them.
I am no programmer, so I didn't understand any of the instructions on how to do it in webpages and forums - the method mentioned was configuring the classpath environment variable in the command prompt, but I got stuck there; I think I have to configure the Java console or something.
I am learning some data mining and I wish to connect to some databases in order to practice. I suppose that for someone knowledgeable in this area this should be an easy job.
I prefer to install a driver in postgresql rather than use a bridge.
Thanks a lot!
The phrase “how to create a JDBC” makes no sense.
You need to learn some basics first. Be clear on what JDBC is (a standard for connecting or mediating between a database and a Java app) and what a JDBC driver is (a particular implementation of JDBC for a specific database).
There are four types of JDBC drivers, the Type 4 (pure Java) being most common in my experience.
For any particular database, you may find there are zero, one or more drivers implemented and available. Some are free-of-cost and open-source, some are not. For example, in Postgres there are two open source drivers, the classic one and a newer rewrite-from-scratch one, as well as some commercial products.
A JDBC driver is only useful when trying to connect a Java app to your database. That may be your own app you are writing, or a finished app you obtained such as a database-administration tool.
You must have a Java implementation installed on your computer, such as one from Oracle or from the OpenJDK project, or from another vendor such as Azul (Zing & Zulu).
You need to learn about the Java Classpath, the list of all the folders where the JVM will be looking for Java classes and JAR files. Read the Oracle Tutorial. The easiest way to go is to drop your JDBC driver JAR into an already existing folder on the Classpath, so you do not need to twiddle with setting the Classpath. For example, on a Mac you could drop your driver into /Library/Java/Extensions.
The JDBC driver sits between the database engine and the Java app. You do not install the JDBC driver into the database engine, as your question's phrase "install a driver in postgresql" suggests.
[Postgres] ↔ [JDBC driver] ↔ [JVM] ↔ [Java app]
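
To make that concrete, here is a minimal sketch of a Java program that reads from Postgres through JDBC. The driver JAR name, host, database, and credentials are placeholders; substitute your own:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcSmokeTest {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- use your own host, database, and credentials.
            String url = "jdbc:postgresql://localhost:5432/mydb";
            try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT version()")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // prints the server's version string
                }
            }
        }
    }

Compile with javac JdbcSmokeTest.java and run with the driver JAR on the Classpath, e.g. java -cp .:postgresql.jar JdbcSmokeTest (use ; instead of : on Windows). If the JVM cannot find the driver on the Classpath, you get the classic "No suitable driver found" SQLException - the very Classpath problem described above. Very old drivers may additionally need a Class.forName("org.postgresql.Driver") call before connecting; modern ones register themselves automatically.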

Domino 8.5.3 - Create an organization extension library / codestore

This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI because it is already in wide use in our environment. I have a plug-in and feature built from just the POI .jars that allows me to build our current POI applications, as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an updatesite so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (that I wish existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
One caveat: the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature to my designer client from the eclipse update site and it works great. However, the install is failing when I import that into our updatesite.nsf database. This means that while the developers can all install from the updatesite if I put it on a network drive, that doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse Updater just hangs. I've let it go for well over an hour and eventually Notes becomes completely unresponsive.
So the question is, is there anything I might have done wrong, either in the development of the plug-in or server configuration that might be causing this issue?
Additional Info
I'm looking at the OSGi console, and it is largely unhelpful. I am getting the following error as I try to install:
SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know this post is over five years old, but for those who find it while trying to resolve the error
SEVERE Could not access digest on the site: no protocol:
the cause is that the URL of the Domino updatesite.nsf was not added to the Archives tab of the site.xml.
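For illustration, the relevant entry in site.xml looks something like this - the plug-in ID, version, and server name here are hypothetical, so adjust them to your project:

    <site>
       <!-- Archives tab entry: maps a site-relative path to the URL of the
            updatesite.nsf hosted on the Domino server -->
       <archive path="plugins/com.example.poi.plugin_1.0.0.jar"
                url="http://yourserver/updatesite.nsf/plugins/com.example.poi.plugin_1.0.0.jar"/>
       <feature url="features/com.example.poi.feature_1.0.0.jar"
                id="com.example.poi.feature" version="1.0.0"/>
    </site>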
I found the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE; YMMV from Eclipse). So if anonymous connections are blocked on the Domino server, you will be out of luck.
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure the plug-in, feature, and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or local Domino server) to the feature directory using a setting. Add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site - Designer then picks up the features and plug-ins from there. After a build, you need to restart Designer or the server to activate the updated feature.
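Such a .link file is a single line; for example (the path is hypothetical):

    path=C:/build/myUpdateSite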
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
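For reference, the notes.ini setting in question is the one below, assuming the database is named updatesite.nsf - check the documentation for your exact Domino release:

    OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf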
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.

Are there any known issues with SpringSource-TC-Server and Java7?

We are using SpringSource-TC-Server and we are considering upgrading to java7. (Currently using java6).
We have not seen any reports of SpringSource-TC-Server not working well with Java 7, but we do not know of any noteworthy projects that have migrated to such an environment.
I'm looking for answer(s) about the following:
Are there any known issues?
Are there any projects who migrated and can report on how it went?
Java 7 has been officially supported since vFabric tc Server 2.7.0:
http://www.vmware.com/support/vfabric-tcserver/doc/vfabric-tcserver-rn-2.7.0.html#whatsnew
Since you're using tc Server rather than plain Tomcat (probably for the commercial support), it's reasonable to migrate the underlying JDK to the latest version only when it is officially supported by the version of tc Server you employ. Otherwise you'd be running an unsupported configuration, which isn't far from running a plain, unsupported open-source version of Tomcat.
Operating tc Server on Java 7 in an officially supported combination of versions gives you two advantages:
It will have been thoroughly tested by VMware for incompatibilities, so you don't have to do that testing yourself.
If any problems do occur, you can always get support from VMware in resolving them.
I know this doesn't directly address your questions; my company also hasn't upgraded yet and is only planning to do so.
I just had the impression that your approach makes no sense for a commercially supported product, and I wanted to outline the reasonable (IMO) approach that is in wide use.
As to known issues: Java 7 is known for its backward-incompatible changes to the XML stack, especially the migration to JAXB 2.2, which changes the handling of java.lang.Boolean objects (see this other question: What are the pitfalls when upgrading to Java 7). This can spring up in many different places. I've seen it cause problems in Apache CXF's cxf-codegen-plugin, which generates Java stubs from WSDL: the wsdl2java tool it launches uses JAXB, and the generated method names for Boolean elements were no longer of the form java.lang.Boolean isSomeBooleanProperty() but of the form java.lang.Boolean getSomeBooleanProperty(), which broke code that depended on those stubs.
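A tiny illustration of the breakage, using the hypothetical property name from above - code written against the old stubs stops compiling once the stubs are regenerated:

    // Stub generated before JAXB 2.2 (optional xs:boolean element mapped
    // to java.lang.Boolean):
    Boolean flag = response.isSomeBooleanProperty();

    // Stub regenerated under Java 7 / JAXB 2.2 -- the accessor is renamed,
    // so the call above no longer compiles:
    Boolean flag2 = response.getSomeBooleanProperty();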
So perform thorough testing if you deal with SOAP web services or XML in general.

Oracle access from iOS

I'm developing an iPad app that needs read-only access to an Oracle database.
Is there any way to do this? As far as I can see, the only options are OCI, which requires a prebuilt binary in the form of the Instant Client (not built for ARM), or the OJDBC drivers. Both of these seem to be out of the question.
In my research I discovered that libmysqlclient compiles for ARM with minimal tuning. This is a stretch, but is there any possible way to use that to my advantage?
I have seen this product providing odbc connectivity through the use of a Windows gateway machine using the ODBC client libraries, but this solution really isn't an option for me at the present time.
Any ideas?
At the very bottom, there are only two libraries for accessing Oracle:
The OCI binary library.
The Java OJDBC Jar file.
All other libraries (such as ODBC, ADO.NET) build upon one of these libraries (usually on OCI).
There's no OCI library for the iPhone (or any ARM architecture as far as I know) and there's no Java VM to use OJDBC. So you cannot directly connect from the iPhone to an Oracle database.
So whatever your solution will be, it'll require an intermediate server (or gateway).
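Borrowing the arrow notation from the JDBC answer above, any workable architecture therefore looks roughly like this (the middle tier could be a web service, a gateway product, or your own server code):
[iPad app] ↔ [HTTP / web service] ↔ [intermediate server using OCI or OJDBC] ↔ [Oracle database]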
While I did end up using an intermediary server... I have since realized that this isn't strictly necessary. Direct access should be obtainable by using the OJDBC drivers directly on iOS, using gcj to compile them for ARM. Since Objective-C is a superset of C, you could use JNI for communication to and from the Java side. Hope this helps anyone who comes here :)
Direct access to an Oracle database from iOS is not possible as of this moment. Exchanging data with an Oracle database by means of web services is fairly simple. You can use APEX for this, lean and mean.

How to version control the build tools and libraries?

What are the recommendations for including your compiler, libraries, and other tools in your source control system itself?
In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact correct configuration of Visual Studio, InstallShield and other tools (including the correct patch version) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control, and then build using them. This would also simplify things in terms of setting up a new build machine -- 1) install our source control tool, 2) point at the right branch, and 3) build -- that's it.
Options I've considered include:
Copying the install CD ISO into source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1-hour build into a 3-hour one).
Installing the software into source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't account for the non-file parts of installing your tools, like registry settings.
Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While this captures the state of the "build machine" with ease, we take on the overhead of a VM, and it doesn't help with the "make the same tools available to developers" issue.
It seems such a basic idea of configuration management, but I've been unable to track down any resources for how to do this. What are the suggestions?
I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies (COMCAT.DLL, anyone?) on non-development software being installed (Office). Your first two options don't solve anything that involves shared COM components. If you don't have a shared-component issue, maybe they will work.
There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical layers in your architecture, like mail server, database server, etc.
This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me.
Put everything needed to build the application on a new workstation under source control.
Keep large applications out of source control - stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files.
Maintain a text document, stored with the source code, that lists the ISO files needed to build the app.
I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain?
Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth...
Just a note on the versioning of libraries in your version control system:
it is a good solution, but it implies packaging (i.e. reducing the number of files of that library to a minimum)
it does not solve the 'configuration aspect' (that is, "what specific set of libraries does my '3.2' project need?").
Do not forget that that set will evolve with each new version of your project. UCM and its 'composite baseline' might give the beginning of an answer to that.
The packaging aspect (minimum number of files) is important because:
you do not want to access your libraries over the network (as through a dynamic view), because compilation times are much longer than when you use locally accessed library files.
you do want to get those libraries onto your disk - meaning a snapshot view, meaning downloading those files... and this is where you will appreciate the packaging of your libraries: the fewer files you have to download, the better off you are ;)
My organisation has a "read-only" filesystem, where everything is put into releases and versions. Releaselinks (essentially symlinks) point to the version being used by your project. When a new version comes along it is just added to the filesystem and you can swing your symlink to it. There is full audit history of the symlinks, and you can create new symlinks for different versions.
This approach works great on Linux, but it doesn't work so well for Windows apps that tend to like to use things local to the machine such as the registry to store things like configuration.
Are you using a continuous integration (CI) or build tool like NAnt to do your builds?
As a .Net example, you can specify specific frameworks for each build.
Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.
In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.
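As an illustrative sketch (the paths are hypothetical), a build step along these lines stops the compiler from silently picking up whatever framework happens to be installed on the build machine:

    csc /nostdlib /noconfig /reference:tools\dotnet\mscorlib.dll /reference:tools\dotnet\System.dll /out:MyApp.exe *.cs

/noconfig keeps csc from pulling in the default response file's references, and /nostdlib suppresses the implicit mscorlib reference, so every assembly comes from the checked-in tools directory.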
Following up on my own question, I came across this posting, referenced in the answer to another question. Although it is more of a discussion of the issue than an answer, it does mention the VM idea.
As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice.
A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be.
When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system at any time, or nuked if the toolchain crashes. That's actually the main point of this scheme, rather than your problem of archiving the state of the toolchain, but I think it's applicable.
Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.