WebLogic Admin Server startup issue - WebLogic 12c

Following the Oracle documentation, I bound the server processes to a user ID (none).
"When Node Manager runs as an init.d service, the launched Managed Servers are owned by the root user. To start Managed Servers as non-root user, first use the Administration Console to enable the Post-Bind UID and Post-Bind GID attributes on the
Domain > Environment > Machines > Configuration > General page
Then, restart Node Manager and the Administration Server before restarting the Managed Servers."
In the Administration Console, in the left pane, I clicked the Machines folder.
In the right pane, I selected the Configure a New Unix Machine link.
I enabled the Post-Bind UID and Post-Bind GID attributes.
As suggested, I saved the settings and restarted the Admin Server. Upon restart I am getting the error below in admin.out.
<Jul 9, 2016 6:16:29 AM UTC> <Critical> <WebLogicServer> <BEA-000252>
<Cannot switch to the group "nobody".
java.lang.IllegalArgumentException: setegid: no such group: 'nobody'
java.lang.IllegalArgumentException: setegid: no such group: 'nobody'
at weblogic.platform.Unix.setEGroup0(Native Method)
at weblogic.platform.Unix.setEffectiveGroup(Unix.java:73)
at weblogic.t3.srvr.SetUIDRendezvous.setEGroup(SetUIDRendezvous.java:159)
at weblogic.t3.srvr.SetUIDRendezvous.makeUnPrivileged(SetUIDRendezvous.java:186)
at weblogic.t3.srvr.SetUIDRendezvous.initialize(SetUIDRendezvous.java:87)
at weblogic.t3.srvr.BootService.start(BootService.java:75)

It looks like your configuration file may have become corrupted. Please follow the steps below:
Take a backup of the config.xml file.
If your Admin Server is not starting, open the config.xml file in $DOMAIN_HOME/config. Search for the word 'nobody', then update the UID and GID to match your OS-level user and group details (see the sketch below).
Start your Admin server.
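For reference, a quick way to check the group and find the relevant entries; the post-bind element names below are from my recollection of a 12c config.xml and may differ in your version:

# Check whether the group 'nobody' actually exists at the OS level
getent group nobody

# Find the post-bind settings and any 'nobody' references in the domain config
grep -n -i 'post-bind' $DOMAIN_HOME/config/config.xml
grep -n 'nobody' $DOMAIN_HOME/config/config.xml

# Either create the missing group (as root)...
groupadd nobody
# ...or edit config.xml so the post-bind GID points at a group that does exist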
Let us know how it goes.
HTH

Related

DB2 (version:11.5.0.1077) Command LIST ACTIVE DATABASES is not working

I downloaded the trial version of DB2 (following the steps here: https://www.db2tutorial.com... some steps didn't come up when I installed) and opened Administrator: DB2 CLP - DB2COPY1 - db2, then entered the command
db2 => LIST ACTIVE DATABASES
output(error):
SQL1096N The command is not valid for this node type.
db2 =>
It's not working. The DB2 version here is 11.5.0.1077. Please advise how to proceed; in fact it's not only this command, many other commands are not working.
The message:
"SQL1096N The command is not valid for this node type."
in response to the "list active databases" command
usually means that you did not install a Db2-server product, or you have multiple Db2-products installed and you chose to act on a client instance instead of a server instance.
You might have installed a full Db2-client product by accident, by clicking the wrong button on the setup program's GUI page for Install a product, or the installable image that you downloaded was not a server image (but instead a client image).
Different ways to see the node type:
You can see which Db2 product(s) you installed if you run appwiz.cpl from the Windows Start menu and examine the list.
You can also see what you installed if you examine the log file created by the setup program during installation.
Open a db2cmd.exe window (from Windows Start > Run) and in there run the command db2 get dbm cfg | more; near the start you will see Node type = .... If your db2cmd.exe window is addressing a server installation, the node type will be something like ...Server edition with local and remote clients. If you see Node type = Client, then you are not addressing a server installation, and you can either uninstall the client image and install a server image, or configure your db2cmd.exe window to address a server installation (see the example after this list).
If you have multiple Db2 products installed then run the Default DB2 and Database Client Interface Selection wizard which should appear in your Windows Start menu under the IBM Db2 group. This lets you choose which instance to make the default, so that when you run a db2cwadmin/db2cmd window, then the correct product gets addressed.
If you installed a Db2-server product, then you can run db2cwadmin.bat (from the Windows Start Menu) and in that window the commands db2start and db2stop will be available, and the command-line db2 list active databases will report (by default) one local database called SAMPLE assuming you created the default database in the First Steps that runs after installation.
If you installed a Db2-server product, with all the default settings on a Microsoft Windows operating-system, then you will also see a process called db2sysc.exe in the Task List when the Db2-instance is running.
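For example, the db2cmd.exe check above might look like this; the exact output wording depends on your edition, so treat these lines as an illustration only:

db2 get dbm cfg | findstr /C:"Node type"

On a server installation this prints something like:
Node type = Enterprise Server Edition with local and remote clients
On a client-only installation it prints:
Node type = Client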
Verify that you downloaded the server image from IBM, and then re-run the setup program and ensure you choose to install a server product.

PostgreSQL in Openshift won't execute the entrypoint and can not start the database

We have a read-only PostgreSQL database that should run in an OpenShift cluster.
We are using RHEL as the underlying operating system.
Our Dockerfile installs the PostgreSQL software, creates the database instance, loads the data into it, then shuts the database down and saves the image.
We are using only bash and SQL scripts, and we deploy the database using Flyway.
When the container starts, the entrypoint script simply starts the database instance using the "pg_ctl" command, then performs an endless loop to keep the container running.
The Dockerfile has USER 26 as its last command, where 26 is the ID of the postgres user. The entrypoint script can be started as the postgres user or by a sudo user.
Everything is working well in Docker.
In OpenShift the container is started by a different user belonging to the root group, but it is neither the root user nor user 26. OpenShift effectively ignores the USER 26 clause in the Dockerfile.
The user starting the container (we'll call it containeruser) has no rights to start the postgres instance, so when running the entrypoint it gets permission denied on the PostgreSQL data folder.
I have tried different options, such as adding containeruser to the wheel group and modifying the sudoers file to allow it to use sudo and start the entrypoint as the postgres user, but with no success.
So I have my database image ready but cannot start it in OpenShift.
On the OpenShift configuration side we are not allowed to make changes such as allowing sudo usage or starting the container as the root or postgres user.
Any idea or help to this problem?
I am not an Openshift expert.
Thank you!
Best regards,
rimetnac
You have two choices.
The preferred choice is to fix your image so that it can run as any user. For this, do not use the existing postgres user. Create a new user whose group is root. Then ensure that all directories/files that PostgreSQL needs to write to are owned by that user, but also have group root and are group-writable. When the container is started up, it will run as an assigned user ID that is not in /etc/passwd, and so will fall back to using group root. Because the directories/files are writable by group root, everything will still work. For more information see:
https://docs.openshift.org/latest/creating_images/guidelines.html
Specifically, section 'Support Arbitrary User IDs'.
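As a minimal sketch of that guideline in a Dockerfile, assuming the data lives under /var/lib/pgsql/data (the user name and UID 1001 are arbitrary examples, not OpenShift requirements):

# Create a user whose primary group is root (GID 0)
RUN useradd -u 1001 -g 0 pguser

# Everything PostgreSQL writes to is owned by that user, group root,
# and group-writable, so an arbitrarily assigned UID (falling back to
# GID 0) can still write
RUN mkdir -p /var/lib/pgsql/data && \
    chown -R 1001:0 /var/lib/pgsql/data && \
    chmod -R g+rwX /var/lib/pgsql/data

# Use the numeric UID so the platform can verify the image is non-root
USER 1001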
The second option, if you have admin control of the cluster and your security team does not object to overriding the default security model, is to allow your image to be run as the user ID it wants.
First create a new service account:
oc create serviceaccount runasnonroot
Next, grant that service account the ability to run as a non-root user ID of its choosing.
oc adm policy add-scc-to-user nonroot -z runasnonroot --as system:admin
Then patch the deployment config to use that service account.
oc patch dc/mydatabase --patch '{"spec":{"template":{"spec":{"serviceAccountName": "runasnonroot"}}}}'
Note that this still requires that you use USER in the image with an integer user ID, not postgres. Otherwise OpenShift cannot verify that the image will run as a non-root user: if you used a user name instead of a user ID, it could be maliciously mapped to root.
I spent days figuring this out and found one good solution.
OpenShift Origin runs an image as a user created by it, as explained in this OpenShift blog post. This prevents programs from being able to access needed files and directories. To successfully run a program on OpenShift Origin, the blog post provides two solutions, however, the first will not work for PostgreSQL and the second has two disadvantages (explained in the notes):
Grant group write access to the directories used by the main program.
This will not solve the problem because, although the PostgreSQL files will be accessible by any program, they must be owned by the owner of the PostgreSQL process.
Ensure that when operating system libraries are used to look up a system user, one is returned for the ID of the user OpenShift Origin runs the image as. The following are two methods for doing this:
Use a package called nss_wrapper, "which intercepts any calls which look up details of a user and returns a valid entry."
Make the UNIX password database file (/etc/passwd) have global write permissions in the image build so that the OpenShift user can be added to it in the S2I run script.
Each option has a disadvantage: the first requires installing an extra package, and the second makes user accounts insecure.
The best solution is to build the Docker image to run as the user OpenShift Origin will run the image as. I built this instructional image that way.
One additional problem to note is that, as the owner of the PostgreSQL process must be the owner of the files and directories accessed by PostgreSQL, PostgreSQL must be set up (i.e. initdb, roles, databases, etc.) during the image build. This is because file ownership can only be changed during the image build and the ownership of the files must be changed after PostgreSQL has been set up for the reason explained in #2 below.
Here are the complete steps, with notes, for setting up PostgreSQL in the image build (a rough Dockerfile sketch follows the list):
Manually create the PostgreSQL data directory and change its ownership to a non-root user that will be used to initialize PostgreSQL and set up the components (e.g. roles and databases) required to run the server on OpenShift Origin.
This is required because the "initdb" executable must be executed by a user other than root and will need access to the data directory. Additionally, this user cannot be the user OpenShift Origin will run the image as because it is not in the system.
Switch to the non-root user.
This is required because the initdb executable must be executed "as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates" (PostgreSQL documentation) and because the PostgreSQL server will be started to set up components (e.g. roles and databases) required to run the server on OpenShift Origin.
Run the "initdb" executable.
Start the PostgreSQL server, set up the required components (roles, databases, etc.) and stop the PostgreSQL server.
Switch back to the root user.
Change the ownership of the PostgreSQL files and directories to the user OpenShift Origin will run the image as.
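Put together, the steps above could look roughly like this in a Dockerfile; the user names, UID 1001, paths, and role/database names are illustrative assumptions, not values from the original image:

# Step 1: create the data directory and hand it to a non-root build user
RUN useradd pgbuild && \
    mkdir -p /var/lib/pgsql/data && \
    chown -R pgbuild /var/lib/pgsql/data

# Steps 2-4: as that user, run initdb, start the server, set up the
# required components (roles, databases, etc.), and stop the server
USER pgbuild
RUN initdb -D /var/lib/pgsql/data && \
    pg_ctl -D /var/lib/pgsql/data -w start && \
    psql -d postgres -c "CREATE ROLE app LOGIN" && \
    createdb appdb && \
    pg_ctl -D /var/lib/pgsql/data -w stop

# Steps 5-6: back to root, chown everything to the user OpenShift Origin
# will run the image as
USER root
RUN chown -R 1001:0 /var/lib/pgsql/data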
Edit (06/20/18): I have found that there is a solution to set up PostgreSQL after the image is built. The user OpenShift Origin will run the image as can be added to the system at the start of the build. This will allow PostgreSQL to be set up and the ownership of its files and directories to be changed after the image build.
After gathering the comments from all contributors, I can answer my question as follows:
Option 1
When you create the postgres database during the image build, you must configure OpenShift policies to allow starting your container as the user that created the database during the build. Use this option when the database must be filled with data and this operation takes so much time that it is inappropriate at container start; the entrypoint will then only start the already prepared database.
Option 2
Create your database when starting the container, using the entrypoint script. Use this option when database creation is fast enough to be done at container start (see the sketch after Option 3).
Option 3
See the last comment from Adrian, which seems to address all the problems, although I have not had time to test it.
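For Option 2, a minimal entrypoint sketch along the lines described in the question; the data directory path is an assumption:

#!/bin/bash
PGDATA=/var/lib/pgsql/data

# First start: initialize the instance and load whatever the database needs
if [ ! -s "$PGDATA/PG_VERSION" ]; then
    initdb -D "$PGDATA"
    pg_ctl -D "$PGDATA" -w start
    # ...run the bash/SQL/Flyway setup here...
    pg_ctl -D "$PGDATA" -w stop
fi

# Start the prepared database and keep the container alive
pg_ctl -D "$PGDATA" -w start
tail -f /dev/null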
Thank you all for your contributions.

Installation with WAS Liberty server and DB2 (how to give root access to a user)

I am trying to set up a standalone server with the WAS Liberty server and a DB2 database. I have installed IBM Installation Manager, the WAS server, WAS supplementary, and DB2. The whole setup is on Ubuntu. I am new to the DB2 Express-C edition, so when I run the command for creating a database it gives me the error below. I need to know the command for giving the user the required authority, how to see the user's numeric ID, and also help with creating the database.
The error message is:
SQL1092N The requested command or operation failed because the user ID does not have the authority to perform the requested command or operation. User ID: "****".

Log shipping setup

Log shipping fails with the error below. The SQL Agent on the secondary server has access to the folder and files in Security. It is not a firewall issue. I created the jobs using the LS scripts, as it fails through the GUI.
I have done this before on a different server where there were several log-shipped databases. This is a new primary and secondary server, and I am not sure what I am missing. Thanks for the help.
Error: Access to the path '\\sqlp\R$\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\Database' is denied. (mscorlib)
----- END OF TRANSACTION LOG COPY
I had to set the Agent account and SQL account of the primary server as admin on the secondary server, and it worked fine.
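For reference, one way to grant that from an elevated prompt on the secondary; the domain and account names are placeholders:

rem Add the primary server's SQL Server and Agent service accounts
rem to the local Administrators group on the secondary server
net localgroup Administrators MYDOMAIN\sql_agent_svc /add
net localgroup Administrators MYDOMAIN\sql_engine_svc /add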

How can I run PostgreSQL DB service under Local System account?

I'm using a PostgreSQL DB in my application, and I used to create a special Windows user account to run the DB service.
Now I need to run the PostgreSQL service under the Local System account. Is there a configuration in PostgreSQL to specify the user account that the service runs under?
Thanks,
Computer Management -> Services -> select the Postgres server service -> right click -> Properties -> check the Log on tab, just like any other Windows service.
(Might be a bit off with the naming, I don't have access to a Windows machine at home, but I'm sure you can improvise.)
Download the zip archived binaries from PostgreSQL site. Extract the contents. Create a folder, say 'data', for storing the db files etc. Open Windows Advanced System Configuration -> Environment Variables.
Create a new System Variable called PGDATA and set its value to the complete path of data folder, as in - 'C:\PostgreSQL\data'(without the single quotes).
Next in Path variable include the complete path to bin folder, as in - 'C:\PostgreSQL\bin;....' .
Once done, do the following...
Open a command prompt (as Admin on Vista and Windows 7 systems). Type 'initdb' and press Enter. This will do the initial setup.
In the same command prompt window, type 'pg_ctl register -N PostgreSQL -U LocalSystem' and press Enter.
Open Run, type services.msc, and check that the 'PostgreSQL' service has been created and that you can actually start it.
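Put together, the whole sequence from an elevated command prompt looks roughly like this; the paths match the examples above, and -D is added here to record the data directory explicitly with the service:

set PGDATA=C:\PostgreSQL\data
set PATH=C:\PostgreSQL\bin;%PATH%

initdb
pg_ctl register -N PostgreSQL -U LocalSystem -D C:\PostgreSQL\data
net start PostgreSQL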
I don't believe that you need to do anything special to run this under a local system account. We use PostgreSQL databases here and we install them on local system accounts all the time. I have never seen a configuration setting in the installation that allows you to indicate a system account for the service.