WebSphere EAR deployment fails - deployment

I am trying to deploy an EAR file and I get an error saying
"A composition unit with name ace-ear already exists. Select a
different application name"
but the application is not there. What else could be the problem?

1. Check the following locations to see if the application directories exist. If they do, delete the application folder 'your_app' (a command-line sketch of these cleanup steps is shown after the list):
<profile root>/config/cells/cellname/applications/your_app
<profile root>/config/cells/cellname/blas/your_app
<profile root>/config/cells/cellname/cus/your_app
2. Clear the contents of the profile/wstemp directory.
3. Clear the contents of the profile/temp directory.
4. Restart the Application Server.
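A minimal command-line sketch of this cleanup for a Unix install (the profile root path is illustrative; cellname and your_app are placeholders, as above):
cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01
rm -rf config/cells/cellname/applications/your_app
rm -rf config/cells/cellname/blas/your_app
rm -rf config/cells/cellname/cus/your_app
rm -rf wstemp/* temp/*
Then restart the Application Server as in step 4.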

Though not recommended, you can manually remove the references to the EAR from the server configuration files.
To check if the EAR exists inside the profile, run the command below from the Dmgr/config folder. Delete the EAR files manually, if present.
find . -name '*ace-ear*'
To check if there are EAR references in the configuration XMLs, run the command below from the Dmgr/config folder, and then remove those entries from the XML files manually, if present.
find . -name '*.xml' | xargs grep -i ace-ear
After this, restart the Deployment Manager, sync the nodes, restart the JVMs, and try deploying the application again.
NOTE: Be very careful when updating the server configuration files manually, as any mistake can corrupt the server configuration. Taking a profile backup before applying any changes to the server configuration files is recommended.
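As a hedged example of taking such a backup, WebSphere profiles ship a backupConfig script in the profile's bin directory (the target file name here is just an example):
<profile root>/bin/backupConfig.sh /tmp/config-backup-before-cleanup.zip
By default backupConfig stops the servers on the node while it runs; there is a -nostop option if that is not acceptable.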

My problem was on a remote environment. With the FileZilla FTP client I searched for all occurrences of my application name inside the appserver folders (Server --> Find remote files), then deleted all folders and files with the name of the application, restarted the server, and deployed the application again; it succeeded.

Related

Firebase hosting: The remote web server hosts what may be a publicly accessible .bash_history file

We host our website on firebase. We fail a security check due to the following reason:
The remote web server hosts publicly available files whose contents may be indicative of a typical bash history. Such files may contain sensitive information that should not be disclosed to the public.
The following .bash_history files are available on the remote server:
- /.bash_history
- /cgi-bin/.bash_history
- /scripts/.bash_history
For each of these, the report notes: "this file is being flagged because you have set your scan to 'Paranoid'. The contents of the detected file has not been inspected to see if it contains any of the common Linux commands one might expect to see in a typical .bash_history file."
The problem is that we don't have an easy way to get access to the hosting machine and delete these files.
Does anybody know how this can be solved?
If you are using Firebase Hosting, you should check the directory (usually public) that you are uploading via the firebase deploy command. Hosting serves only those files (plus a couple of auto-generated ones under the reserved __/ path for auto-configuration).
If you have a .bash_history, cgi-bin/.bash_history or scripts/.bash_history in that public directory, then it will be uploaded to and served by Hosting. There are no automatically served files with those names.
You can check your public directory, and update the list of files to ignore on the next deploy using the firebase.json file (see this doc). You can also download all the files that Firebase Hosting is serving for you using this script.
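A minimal sketch of what that ignore list can look like in firebase.json (the public directory name is the common default and the patterns are illustrative):
{
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**",
      "**/.bash_history"
    ]
  }
}
Note that the "**/.*" pattern already keeps dot-files such as .bash_history out of the deploy; the explicit last entry is only there to make the intent obvious.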

Is config.xml created at domain creation or when the admin server starts for the first time in WebLogic?

I read this in the official documentation but I am confused about config.xml.
When you start a server instance in a domain for the first time, WebLogic Server creates the following subdirectories in the domain directory:
Files containing security information
logs directory for storing domain-level logs
For each server running in the domain, a directory for storing server logs and HTTP access logs
The config.xml file is created during domain creation; the first server start only creates the additional subdirectories listed above (security files, the domain logs directory, and per-server log directories).
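A quick way to confirm this on your own installation (the domain path is illustrative) is to list the file right after creating the domain, before ever starting the admin server:
ls <DOMAIN_HOME>/config/config.xml
If the domain was just created with the Configuration Wizard or WLST, config.xml is already there; the log and security subdirectories mentioned above only appear after the first server start.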

FTP transfer server to server using SSH/command line

I have a bunch of vendors that make their FTPs available for downloading images of their products. Some of these guys like to put them into multiple subfolders, using the collection or style name and then the SKU. For example, they will make a folder structure like:
Main folder
---> Collection A
------> Sku A
----------> SKUApicture1.jpg, SKUApicture2.jpg
------> sku B
----------> SKUBpicture1.jpg, SKUBpicture2.jpg
---> Collection B
------> Sku C
----------> SKUCpicture1.jpg, SKUCpicture2.jpg
------> sku D
----------> SKUDpicture1.jpg, SKUDpicture2.jpg
Until now, I have found it easiest to log onto my server via SSH, navigate to the folder I want, and then log on to my vendor's FTP, at which point I put in the user name and PW and navigate to the folder I want, and then take all the images using mget. If all (or most) of the files are in 1 folder, this is simple.
The problem is mget won't take any folders or subfolders; it will only take files within the given folder. In the above example, my vendor has over 10 folders and each one has 100+ subfolders, so navigating to each one isn't an option.
Also, the industry I deal in isn't tech savvy, so asking their "tech people" to enable/allow SCP, SFTP, or rsync, etc., is likely not an option.
Downloading all the images locally and re-uploading them to my server also isn't practical, as this folder is over 10GB.
I'm looking for a command (mget or other) that will enable me to take ALL files and subfolders, as is, and copy straight to my server (via SSH).
Thanks
NOTE: For this particular server I tried rsync, but got an error telling me it wasn't compatible with that command. I doubt I have the command wrong, but if you want to post the proper way to rsync I'll be more than happy to try it again and provide the exact error.
Have you tried something like
wget -r ftp://username:password@ftp.example.com/
It should recursively get all the files from the remote ftp.
You can use lftp:
lftp -e 'mirror <remote download dir> <local download dir>' -u <username>,<pass> <host>
Taken from Copying Folder Contents with Subdirectories Over FTP.
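For example, a filled-in version that mirrors the whole image tree straight onto your server (run it there over SSH; host, credentials, and directory names are placeholders) might look like:
lftp -e 'mirror --verbose "Main folder" ./vendor-images; quit' -u vendoruser,vendorpass ftp.vendor.example.com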
Have you considered using SFTP? You said that FTP works the way you want, and SFTP behaves the same way from the client's point of view; an FTP client with SFTP support works just as before but uses SSH to connect.
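If the vendor's server does accept SSH logins, a hedged command-line sketch of the same recursive download with OpenSSH's sftp (host, user, and paths are placeholders) would be:
sftp vendoruser@ftp.vendor.example.com
sftp> get -r "Main folder" ./vendor-images
Recursive get has been supported by OpenSSH's sftp for years, so this mirrors the mget workflow without needing wget or lftp.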

How to set up JAVA_OPTS for AspectJ to work in Tomcat running as a service on a Windows Server?

Problem
I need to integrate AspectJ code into an existing application running on Tomcat, but I think I am not setting JAVA_OPTS correctly.
Our vendor has created some AspectJ code that passes logged in user id information to the CONTEXT_INFO() object within MSSQLServer Connection. This is so that within an audit database trigger that we created, we can capture the user id that made the change.
What I have done
1. Added the following code to our database trigger:
DECLARE @appUserID INT
SET @appUserID = ISNULL(REPLACE(CONVERT(VarChar(128), CONTEXT_INFO()),CHAR(0), ''), '0');
2. Added aspectjrt.jar to the web application WEB-INF\lib folder.
3. Added vendorAspectJCode.jar to the web application WEB-INF\lib folder.
4. Added aspectjweaver.jar to Tomcat's lib folder \tomcat7.0.27\lib.
5. Edited catalina.bat with the following:
There is a line of code that looks like this:
set JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG%
I have changed that to
set JAVA_OPTS="%JAVA_OPTS% %LOGGING_CONFIG% -javaagent:D:\tomcat\tomcat7.0.27\lib\aspectjweaver.jar"
but it did not seem to work.
So then I tried setting it like this, adding a new set JAVA_OPTS line:
set JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG%
set JAVA_OPTS="-javaagent:D:\tomcat\tomcat7.0.27\lib\aspectjweaver.jar"
but that did not seem to do the trick either.
After making the changes above and running a test through the web application front end, the user id that was inserted into the database was 0, so something has not been done right, and the part I feel least comfortable with, of all the steps above, is Step 5.
Does anybody know if the syntax for setting JAVA_OPTS is correct, or whether there is another place to put it?
After a lot of trial and error I found out how to integrate AspectJ into Tomcat running as a service on a Windows server. I do not know why, but the part described below was the cause of my problems.
Of course, as I mentioned in my question above you need the following prerequisites:
Add aspectjrt.jar to the web application WEB-INF\lib folder.
Add vendorAspectJCode.jar to the web application WEB-INF\lib folder.
Add aspectjweaver.jar to tomcat's lib folder \tomcat7.0.27\lib
Setting -javaagent:PathToMyAspectjweaver\aspectjweaver.jar in service.bat did not work, so I had to set it in the registry and uninstall/reinstall the Tomcat service for the changes to be picked up, as follows:
1. First, I recommend turning UAC off and making sure that you are an Administrator.
2. Stop the Tomcat service if it is running.
3. Delete the Tomcat service.
4. Verify in Windows Services that the service is no longer there.
5. Verify in the Windows registry that everything related to the service got deleted. If not, do so manually.
6. Install the Tomcat service.
7. Verify in Windows Services that the service got created.
8. Find the service in the registry and edit the Options value (see the note after this list for where that value typically lives), appending the following:
-javaagent:PathToMyAspectjweaver\aspectjweaver.jar
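For reference, and as an assumption about a typical setup rather than a guarantee: a Procrun-based Tomcat service keeps its JVM options in a multi-string registry value, usually at a key like
HKLM\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\YourServiceName\Parameters\Java\Options
(drop Wow6432Node for a 64-bit Tomcat on 64-bit Windows). Each JVM option goes on its own line of that value, so the -javaagent entry above is added as a single new line.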
I have created a couple of bat files for these steps. Steps 2 and 3 would look something like this (TomcatServiceUninstall.bat):
echo OFF
ECHO Removing Tomcat Service...
sc stop YourServiceName
sc delete YourServiceName
ECHO Removing Registry Key containing config data for Tomcat7
REG DELETE "HKLM\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\YourServiceName" /f
REG DELETE "HKLM\SOFTWARE\Wow6432Node\Apache Software Foundation\Tomcat\7.0" /f
ECHO Uninstall Complete - File Directories remain intact.
Step 6 would look like this (TomcatServiceInstall.bat):
ECHO OFF
ECHO Running Service.bat to install the Tomcat 7 - YourServiceName - Service
cd "C:\Path to your tomcat\tomcat7.0.27\bin"
service.bat install
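One way to sanity-check that the option was picked up after re-installing the service (assuming the standard Procrun wrapper that ships with Tomcat, and the placeholder service name used above) is to open the service configuration GUI and look for the -javaagent line on the Java tab:
tomcat7w.exe //ES//YourServiceName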

Jenkins - Publish Over CIFS Plugin

I am getting confused with this plugin.
Basically my target is to deploy files from Server1 to Server2.
Now the build output dir is in a specific location on Server1.
example: E:\BuildOutput\Apps\Application1\Bin\Release\
I need to deploy them to Server2: C:\Program Files\Tools\Application1\Bin\
How do I set up this plugin to work to what I need?
I am getting stressed by the number of files that need to be deployed to another server; I just wish a simple xcopy to another server could work.
I am looking for a plugin, if not this one, that basically deploys only the files that have changed to another server, for automated feature testing.
Any method will do too, if possible.
XCOPY should work fine. You need to create a share on Server2 in the desired location.
Go to the Jenkins configuration and click "Add build step"->"Execute Windows batch command"
You should be able to execute any DOS commands you need there.
XCOPY E:\BuildOutput\Apps\Application1\Bin\Release\my_app.exe \\SERVER2\Share
If you don't want to share your application's bin directory:
Make a different share on Server2
Configured build to XCOPY to the new share
Add Server2 as a build node (Manage Jenkins->Nodes)
Create a new build job to move the files where you want them
Tie the new job to the Server2 build node (check the box "Restrict where this project can be run" in the job config)
If your account has admin rights on Server2 you can just connect to the admin share of the C: drive like this:
XCOPY E:\BuildOutput\Apps\Application1\Bin\Release\* "\\SERVER2\c$\Program Files\Tools\Application1\Bin"
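If you also need to copy the whole tree and only pick up files that have changed since the last run (which is what the question asks for), XCOPY's standard switches can do that; a hedged variant of the same command (paths as above) would be:
XCOPY E:\BuildOutput\Apps\Application1\Bin\Release\* "\\SERVER2\c$\Program Files\Tools\Application1\Bin" /D /E /I /Y
Here /D copies only source files newer than the destination copy, /E includes subdirectories (even empty ones), /I treats the target as a directory, and /Y suppresses overwrite prompts.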