How to access the database imported through datapump - oracle10g

I just imported data dump through below command:
IMPDP user/pass FULL=Y DUMPFILE=BIRDV24012014.DMP LOGFILE=BIRDV24012014.log;
The dump has been restored; the issue is I don't know how to connect to the database I just imported. What service or TNS entry does it reside in, and how can I query it?

You didn't import a database, you imported the contents of your file into your existing database. If you could successfully run impdp user/pass then your ORACLE_SID etc. is already set and you should be able to log in and query with sqlplus user/pass.
If you've come from another RDBMS background you may be confusing 'database' with 'schema'. Depending on what was in the dump, you've probably created a load of schema objects and data under the USER schema (or whatever your real 'user' value was).
The import makes no difference to this, but if you want to access the database from another client (e.g. from another machine, or over JDBC) then you'll need to check your listener configuration to get the hostname/IP address and port it's listening on, and get the service name for the database; all of which can be obtained from lsnrctl services if you have permission to run that. You can then use those values for a JDBC URL, or in a tnsnames.ora entry, or ODBC, etc.
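For example, if lsnrctl services reported the listener on host dbhost, port 1521, serving service name orcl (all placeholder values), a matching tnsnames.ora entry might look like:
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
and the equivalent JDBC thin URL would be jdbc:oracle:thin:@//dbhost:1521/orcl.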

Look at your ORACLE_SID environment variable. There you'll find the instance ID. If you ran the IMPDP tool as the oracle OS user, you should also be able to connect to the database using
sqlplus / as sysdba
If all fails, look at your /etc/oratab file to see which instances are available on this host.
On another note, your command seems incomplete. Data Pump requires a DIRECTORY parameter to know where to look for the dump file you specified.
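For example, something along these lines (DATA_PUMP_DIR is just the default directory object; substitute whichever directory object actually contains your file):
IMPDP user/pass FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=BIRDV24012014.DMP LOGFILE=BIRDV24012014.log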


Apache Airflow Init Db

I am trying to initialize a database for my project, which is based on Apache Airflow. I am not too familiar with what happened, but I changed the value in my airflow.cfg file to sql_alchemy_conn = postgresql+psycopg2:////Users/gabeuy/airflow/airflow.db. Then, when I saved the changes and ran the command airflow db init, an error occurred and did not allow me to initialize the db.
I tried looking up different ways to change it and ensured that I had Postgres and psycopg installed, but it still resulted in an error when I ran the command. I was expecting it to run so that I could access the Airflow db on localhost with the DAGs.
Your sql_alchemy_conn is pointing to a local file path (indicating a SQLite DB), but the protocol is indicating a PostgreSQL DB. The error is telling you it's missing a password, which is required by PostgreSQL.
For PostgreSQL, the expected URL format is:
postgresql+psycopg2://<user>:<password>@<host>/<db>
And for a SQLite DB, the expected URL format is:
sqlite:////<path/to/airflow.db>
A SQLite DB is convenient for testing purposes. A SQLite DB is stored as a single file on your computer which makes it easy to set up (airflow db init will generate the file if it doesn't exist). A PostgreSQL DB takes a bit more work to set up, but is generally advised for a production scenario.
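For illustration, a PostgreSQL-backed airflow.cfg entry could look like the following (host, port, database name and credentials are placeholders), while the SQLite form would simply reuse your existing file path:
sql_alchemy_conn = postgresql+psycopg2://airflow_user:airflow_pass@localhost:5432/airflow_db
sql_alchemy_conn = sqlite:////Users/gabeuy/airflow/airflow.db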
For more information about Airflow database configuration, see: https://airflow.apache.org/docs/apache-airflow/stable/howto/set-up-database.html.
And for more information about airflow db CLI commands, see: https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#db.

How to import sql file in Google SQL with binary mode enabled?

I have a database that is giving this error:
ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected.
I'm importing the database through the console with gcloud sql import sql mydb gs://my-path/mydb.sql --database=mydb, but I don't see any flags for binary mode in the documentation. Is it possible at all?
Optionally: is there a way to set this flag when importing through MySQL Workbench? I haven't seen anything about it there either, but maybe I'm missing some setting. If there is a way to set that flag, then I can import my database through MySQL Workbench.
Thank you.
Depending on whether the source database is hosted on Cloud SQL or in an on-premise environment, the proper flags have to be set during the export so that the dump file is compatible with the target database.
Since you would like to import a file that has been exported from an on-premise environment, mysqldump is the suggested way to perform the export.
First, create a dump file as suggested in the documentation. Make sure to pay attention to the following 2 points:
Do not export customer-created MySQL users; this will cause the import into the new instance to fail. Instead, manually create the MySQL users you need on the new instance after the import.
Make sure that you have configured the appropriate flags so that the dump file contains all the necessary details, e.g. triggers, stored procedures, etc.
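For example, an export command might look roughly like this (host, credentials, database and file names are placeholders; check the Cloud SQL documentation for the full list of recommended flags):
mysqldump --databases [DB_NAME] -h [HOST] -u [USERNAME] -p --hex-blob --single-transaction --routines --triggers --events --set-gtid-purged=OFF > [FILE_NAME].sql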
Then, create a Cloud Storage Bucket and upload the dump file to the bucket.
Before proceeding with the import, grant the Storage Object Admin role to the service account of the target Cloud SQL instance. You may do that with the following command:
gsutil iam ch serviceAccount:[SERVICE-ACCOUNT]:objectAdmin gs://[BUCKET-NAME]
You may locate the aforementioned Service Account in the Cloud SQL instance Overview, or by running the following command:
gcloud sql instances describe [INSTANCE_NAME]
The service account will be mentioned at the serviceAccountEmailAddress field.
Now you are able to do the import either from the Console, using the gcloud command, or via the REST API.
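For reference, the gcloud form of the import looks like this (instance, bucket, file and database names are placeholders):
gcloud sql import sql [INSTANCE_NAME] gs://[BUCKET-NAME]/[FILE_NAME].sql --database=[DB_NAME]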
More details in Google documentation
Best Practices for importing/exporting data

Where is the DATA_DUMP_DIR in sql developer

I'm trying to import a .dmp file using the Data Pump Import tool in oracle sql developer.
I'm connected to an oracle database running in a container on my local machine.
When I get to the step where I specify where the dump file is to import, where should I place the .dmp file?
DATA_PUMP_DIR is a default Oracle directory object. It isn't part of SQL Developer; the import tool is really just giving you a GUI equivalent of running impdp from the command line.
You can find the operating system location that Oracle directory object points to by querying the data dictionary:
select directory_path from all_directories where directory_name = 'DATA_PUMP_DIR';
The path that query returns is on the database server (in your case that will be inside your container too), and your dump file needs to go there.
You might want to create additional directory objects pointing to other locations, and grant suitable privileges to users to be able to access them; but they all need to be on the DB server and read/writable by the Oracle process owner on that server.
(They could be remote filesystems mounted on the server, they don't necessarily have to be local storage, but that's another issue and more operating-system specific. Again, in your case, you might be able to share a folder on your local machine with the container, if you don't want to copy the file into the container.)
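As a rough sketch, creating and granting access to such a directory object looks like this (the path and grantee are placeholders, and you need the CREATE ANY DIRECTORY privilege or a DBA to run it):
create directory my_dump_dir as '/path/on/the/db/server';
grant read, write on directory my_dump_dir to myuser;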

Create PostgreSQL DB On Connection

I want to reduce the number of requirements to get started with my webapp. At the moment you need to run a "create database, create user, grant all" script before you can start debugging.
I'd like the code to be checked out and run straight away without requiring developers to have to read through lots of documentation and do lots of manual steps.
h2 allows you to specify a connection string and it will create the db if it doesn't already exist.
Is it possible to do that using PostgreSQL?
Or is my only option (to meet the requirements) to configure h2 for dev work and PostgreSQL for production?
A connection in Postgres is always to a particular database, but by default every install will have a postgres DB intended for running maintenance commands. The user will still need to supply some superuser login credentials, but assuming you have those, you can run your "create database, create user, grant all" script automatically when the webapp is first accessed.
For instance, have a generated config file which is ignored in source control; before loading the file, check if it exists; if it doesn't, run the install routine.
You can even load an HTML form allowing the user to provide the superuser credentials, choose a name for the DB, and any other commonly-changed configuration options. If these are all defaulted, the "manual step" is simply to glance that they are correct, and click "OK".
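As a minimal sketch of that first-run routine, here in Perl with DBD::Pg (a hypothetical choice; the database name myapp, the role myapp_user and the superuser credentials are all placeholders you would take from the form or a config template):
use strict;
use warnings;
use DBI;

# Connect to the default maintenance database with superuser credentials.
my $dbh = DBI->connect('dbi:Pg:dbname=postgres;host=localhost',
                       'postgres', 'superuser-password',
                       { RaiseError => 1, AutoCommit => 1 });

# Does the application database already exist?
my ($exists) = $dbh->selectrow_array(
    'SELECT 1 FROM pg_database WHERE datname = ?', undef, 'myapp');

unless ($exists) {
    # CREATE DATABASE cannot run inside a transaction, hence AutoCommit above.
    $dbh->do("CREATE ROLE myapp_user LOGIN PASSWORD 'changeme'");
    $dbh->do('CREATE DATABASE myapp OWNER myapp_user');
    $dbh->do('GRANT ALL PRIVILEGES ON DATABASE myapp TO myapp_user');
}
$dbh->disconnect;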

Informix dbserver connections in sqlhosts via perl

I want to add a new Informix server entry into sqlhosts, but I'm not quite sure how it will impact the existing connection.
Currently sqlhosts contains only one server entry...
dbserver onsoctcp 111.111.111.20 7101
The database handle is created within an existing perl module (db is a database on the server)...
my $dsn = "DBI:Informix:db";
my $dbh = DBI->connect($dsn,"user","password");
Notice that "dbserver" is never referenced.
I want to add a test server to sqlhosts. Something like this...
dbserver onsoctcp 111.111.111.20 7101
dbserver_test onsoctcp 111.111.111.21 7101
With only one entry in sqlhosts, everything has been working fine. But my connection never references the server name in sqlhosts.
So, my question(s)...
Does Informix just try to use the only one available?
Will adding a second server entry in sqlhosts force me to include the server name in the connection string?
Thanks!
Informix client uses environment variables to resolve hosts and other configuration; check that INFORMIXDIR is set to the path where Informix CSDK is installed (I assume it is), and set INFORMIXSERVER to point to the new entry in sqlhosts. See this article in IBM knowledge base.
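For example, in Perl you can select the new entry at run time by setting the variable before connecting (a small sketch; dbserver_test is the sqlhosts name you plan to add):
$ENV{INFORMIXSERVER} = 'dbserver_test';   # must match the sqlhosts entry name
my $dbh = DBI->connect('DBI:Informix:db', 'user', 'password');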
Alternatively, use the db@server data source format (single quotes, so Perl does not interpolate @server as an array):
my $dbh = DBI->connect('DBI:Informix:db@server', 'user', 'password');
Maybe it is a permissions issue? From the documentation:
Note that you might also be able to connect to other databases not listed by DBI->data_sources using other notations to identify the database. For example, you can connect to "dbase@server" if "server" appears in the sqlhosts file and the database "dbase" exists on the server and the server is up and you have permission to use both the server and the database on the server and so on. Also, you might not be able to connect to every one of the databases listed if you have not been given at least connect permission on the database. However, the databases listed by the DBI->data_sources method certainly exist, and it is legitimate to try connecting to those sources.
http://search.cpan.org/~johnl/DBD-Informix-2013.0521/Informix.pm