Virus scan Xcode or Eclipse project

Is it possible for an Xcode or Eclipse project to contain malicious code that would render the computer the project is being opened on susceptible to a virus? Or, assuming the answer is definitely yes: are there ways to protect against this?
Specifically, if I downloaded a project from a repository from an unknown source, could it open up security issues on my machine? Is there virus software which can scan for this?

Yes, any project that you download and run on your machine executes just as any other normal process, thus it can do whatever any other desktop application can do, including malicious actions like installing a virus or sending spam from your machine.
There are virus scanners which can scan any file on access, i.e. whenever a file is read from your hard disk, it is scanned first. Such a scanner can prevent known malware from executing in this way. I do not have a recommendation for a specific scanner, but Google will help you find the right one for your needs.

Related

How to make Kaspersky trust programs developed with an IDE (e.g. NetBeans, Eclipse, etc.)

I have searched here, Kaspersky Forums, and Googled but can't get a clear answer.
I am developing a program in NetBeans which means I recompile/relink every few minutes. Kaspersky continually interrupts the test runs so that I can choose to trust the application.
So, how do I get Kaspersky to trust applications I develop with an IDE? In particular, in the Settings > Application Control > Manage applications section, is there something I have to set for NetBeans, or is there something I have to set for the .exe of the developed program?
Normally, antivirus software lets you exclude specific paths from real-time analysis, and I think Kaspersky allows you to do the same thing. But antivirus is antivirus; it puts itself above the user.
You can find the explanation here: https://support.kaspersky.com/us/11386#block2

Domino 8.5.3 - Create an organization extension library / codestore

This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI because that is already in wide use in our environment. I have a plug-in and feature built just from the POI .jars that allows me to build our current POI applications as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an updatesite so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (that I wish existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
The caveat is that the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
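For anyone following the same route: the wrapper plug-in described in the first article essentially just puts the jar on the bundle classpath. A minimal MANIFEST.MF sketch, with placeholder names, version and exported packages (your exact exports will differ):

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-Name: Apache POI Wrapper
    Bundle-SymbolicName: com.example.poi.wrapper
    Bundle-Version: 3.9.0
    Bundle-ClassPath: lib/poi-3.9.jar
    Export-Package: org.apache.poi,
     org.apache.poi.hssf.usermodel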
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature to my designer client from the eclipse update site and it works great. However, the install is failing when I import that into our updatesite.nsf database. This means that while the developers can all install from the updatesite if I put it on a network drive, that doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse Updater just hangs. I've let it go for well over an hour and eventually Notes becomes completely unresponsive.
So the question is, is there anything I might have done wrong, either in the development of the plug-in or server configuration that might be causing this issue?
Additional Info
I'm looking at the OSGi console and that is largely unhelpful. I am getting the following error as I'm trying to install:

    SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information.
I know that this post is over 5 years old, but for those who find this and are trying to resolve the error

    SEVERE Could not access digest on the site: no protocol:

it is due to the update site project not having the URL of the Domino updatesite.nsf added to the Archives tab of the site.xml.
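In site.xml terms, an Archives tab entry is just an <archive> element; a rough sketch (the jar path and the Domino server URL are placeholders for your own setup):

    <archive path="plugins/com.example.poi.wrapper_3.9.0.jar"
             url="http://yourdominoserver/updatesite.nsf/"/>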
I found the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE; YMMV from Eclipse). So if anonymous connections are blocked on the Domino server, you will be out of luck.
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure the plug-in, feature and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or local Domino server) to the feature directory using a setting: add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site; it then picks up the features and plug-ins from there. After a build you need to restart Designer/the server to activate the updated feature.
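The .link file itself is just a single line of the form path=<directory>; for example (the path is a placeholder):

    path=C:/IBM/MyUpdateSite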
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
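If memory serves, the notes.ini setting in question is OSGI_HTTP_DYNAMIC_BUNDLES (double-check it against the Dalsgaard article for your version); the database name below is a placeholder:

    OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf

followed by restart task http on the server console.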
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.

Two Eclipse installations running on two different systems sharing the same workbench

I have two systems, one in my office and one at home, and I am working on one Java application. The problem I am facing is that after completing work at the office I need to continue it at home. To do this, before closing Eclipse I copy the complete project onto a pen drive, then copy it onto my home system, and then I am able to work from home and start from the place where I left the program at the office. I need to do the same task again to go from home back to the office.
Is there any Eclipse plug-in, or any other way, by which I will be able to synchronize both workbenches?
There are some plug-ins available, like SVN and CVS, but these plug-ins require a server, a static IP address, etc., which is costly.
Example: Google Drive
If you install Google Drive on two different systems with the same Google account, then any change you make on one system will be reflected on the other system as well.
Edited: If you are using a personal computer at work, or if the office computer allows it, you can use Dropbox. Create the project in Dropbox, and then when at work all you need to do is import the project (do not copy it into the workspace). Whatever changes you make are persisted in Dropbox.
It sounds like what you need is a version control system, and one that is available as a free service. This allows you to store the code on an external server and have it reachable both from work and home.
Git is very popular these days for good reasons. It has a good Eclipse plugin, EGit, that comes preinstalled in later Eclipse releases. There are several external repositories that you can use, see this question, or just Google. Many offer free hosting for small projects.
This will require a bit of a learning curve, but it will help you greatly.
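For reference, the basic round trip with command-line Git looks roughly like this (the repository URL is a placeholder for whichever hosting service you pick; EGit exposes the equivalent operations inside Eclipse):

    git clone https://example.com/you/yourproject.git
    # ... work in Eclipse, then:
    git add -A
    git commit -m "describe what changed"
    git push
    # on the other machine:
    git pull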
I use a small (pocket size) external drive. I have eclipse and my workspace on it (and other tools I need) - I can easily plug it into my work or home PC (or client PC if traveling). It works great - just assign it the same drive letter on both home and work PC.
I would also recommend you use a code repository in addition to an external drive to store the source code - CVS, SVN, Git, etc.

What strategy do you use to sync your code when working from home

At my work I currently have my development environment inside a Virtual Machine. When I need to do work from home I copy my VM and any databases I need onto a laptop drive sized external USB drive. After about 10 minutes of copying I put the drive in my pocket and head home, copy back the VM and databases onto my personal computer and I'm ready to work. I follow the same steps to take the work back with me.
So if I count the total amount of time I spend waiting around for files to finish copying in order for me to take work home and bring it back again, it comes to around 40 minutes! I do have a VPN connection to my work from home (providing the internet is up at both sites) and a decent internet speed (8mbits down/?up) but I find Remote Desktoping into my work machine laggy enough for me to want to work on my VM directly.
So in looking at what other options I have or how I could improve my existing option I'm interested in what strategy you use or recommend to do work at home and keeping your code/environment in sync.
EDIT: I'd prefer an option where I don't have to commit my changes into version control before I leave work - as I like to make meaningful descriptive comments in my commits, committing would take longer than just copying my VM onto a portable drive! lol Also I'd prefer a solution where my dev environment stays in sync too. Having said that I'm still very interested in your own solutions even if they don't exactly solve my problem as best as I'd like. :)
A distributed/decentralized version control system will suit your needs: Git, Bazaar, Mercurial, darcs... you have plenty of alternatives.
Use version control software like SVN, SourceOffSite, etc. You just have to check in all your changes and get the latest changes when you want to sync.
Or you can use Windows Live Sync -> https://sync.live.com/foldersharetolivesync.aspx
Hasn't anyone recommended rsync? Use an rsync client to send the diff between files. You can apply these diffs, bringing your files up to date. For the smallest file transfer, it's probably the best idea.
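A minimal sketch, assuming SSH access between the two machines (host and paths are placeholders):

    rsync -az --delete -e ssh ~/projects/ user@home-box:projects/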
I simply use an external portable notebook drive and do all my work on that. All my PCs have it set to the same drive letter, so there's no copying anything. I've not attempted to run VMs this way, but I don't see any reason it shouldn't simply work.
I use Dropbox.
We use Citrix and then I do a remote desktop connection to my PC at work. It is not the fastest solution in the world, but it does eliminate the problem of keeping two or more workstations up-to-date.
Here is a solution I use.
Set up a VPN between the office network and the laptop.
Install the VisualSVN Server
Load all projects in the SCC.
When at the office I check out a project, work on it and then check it in. When at home or around the world I connect to the office via VPN, check out my project, do my thing then check it in. Via the VPN connection I can also RDP to my dev boxes and or servers.
Hope this helps. Good luck!
I either connect remotely to the office SVN, or VPN in and remote desktop my dev or desktop machine and carry on working. It's very rare I sync any files, but when I do it's usually with DropBox (although you can't really do that with large files).
Write a program that will synchronize all your data over the internet and then shut down your computer. At the end of the day you launch it and go home, and when you get home all the data is already there.
We work with a distributed team, so it is vital everyone has easy and secure code repository access. For this, we use SVN over ssl/https. It works great, reliably and secure.
Depending on the VM software you are using, why don't you set up two different VM disks: keep your user profile/dev files on one disk, and the OS and other programs that change rarely on the other.
This way you can probably get away with only having to copy the larger disk image when you've installed something new and end up only copying a single virtual disk containing your work.
Just set up an SVN server at home, forward your router port and get on with your life. rsync is also a good, fast solution; just remember to use it over SSH.
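A rough sketch of the home server setup, assuming a Linux box (the repository path and hostname are placeholders):

    svnadmin create /srv/svn/myproject
    svnserve -d -r /srv/svn      # listens on port 3690 by default; forward that port on the router
    # from the office:
    svn checkout svn://your-home-ip/myproject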
I had a similar problem. But fortunately we had a source control server (TFS) configured, so I worked only from the local virtual machines stored on my external drive and then checked the required files in to TFS as and when required.
You haven't specified the OS and virtualization system, but if you're working with VM images that can be mounted, e.g. Xen on Linux, then you could mount the image and sync it via rsync.
I connect to the office network and download the latest version from SVN, and I use the dev MySQL server, so I am just like another computer in the office network.
I imagine that most of the time spent copying involves the database. Is that right? If so, can you not simply connect to your work DB from home using your VPN connection?
You would still copy your source files (or use a source code control system as others have suggested), but this would only take a fraction of the time.
If all you need is the virtual machine from your work computer, then you could mount a remote share (using NFS or SMB) where your virtual machine files are stored and run the virtual machine from there. This should be faster than using remote desktop.
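A sketch of that, assuming the work file server exports the VM directory over NFS (names and paths are placeholders):

    sudo mount -t nfs fileserver.example.com:/exports/vms /mnt/vms
    # then point your VM software at the image under /mnt/vms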
I also use DropBox, and that is key because it is important to keep it simple.
It is generally better if you can have some type of remote desktop ability, because this will allow you to use a standard workstation configuration, and it will allow for consistent connection to network resources (database server, business servers like workflow, etc).
Working offline, in my opinion, is ok for certain tasks, but overall there are obstacles for systems which connect to other resources (unless you plan to move those resources to your home box).
It was a problem for me too. So, the company bought me a laptop, and I do my work on it, at home or anywhere else.
I have a setup where a folder on one machine is synced to a folder on another machine. Any changes to the contents on one machine are also made on the other machine within a minute.
So you could sync the top-level folder of your work files and have it sync to your home machine. What I like about this is that the syncing is completely transparent. As far as the user experience goes, I'm simply using the file system. No external app to interact with.
I use Live Sync from Microsoft to do this. You'll need to create a Windows Live ID to use this system. It works for Windows and Macs.
Dropbox and Microsoft's Live Sync are good options that have already been mentioned. My personal favorite is Live Mesh, also from Microsoft. The one great feature that puts it above the other two, in my mind, is the ability to specify which folders get synched on which computers, and where the folders are located. So, for example, I synch my Visual Studio 2005/Projects folder between my work machine and my dev box at home, and I synch Visual Studio 2008/Projects between my side gig VM and my home dev box.
I have a MacBook with all my dev software on it; when I go to work, I start it in target FireWire mode and plug it into my work Mac Pro with the fast processor, LAN connection, big monitor, etc. This way I never have to leave my user folder, but I have access to all the software and hardware available at work.
Why don't you just use version control? A DVCS?
Here is a tutorial on DVCS for Windows users (very simple):
http://codicesoftware.blogspot.com/2010/03/distributed-development-for-windows.html
Some ideas:
Use network storage (with SSD cache if speed is a concern), either for your code or to host your VM.
Separate data and OS into two virtual disks in your VM.
Google Drive, OneDrive, Dropbox, etc.
If you use Visual Studio (Code), try the Live Share extension.
Dockerize your environment. Alternatively, I keep a bash script for all the setup I did, so I could almost one-click reinstall my dev environment anywhere.
Use a second version control system covering your whole work directory. Commit and push everything before switching environments, then pull and hard-reset to that commit on the other machine (a rough sketch follows below).
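A rough sketch of that last idea using Git ("wip" is an arbitrary branch name):

    git add -A
    git commit -m "wip: leaving this machine"
    git push origin wip
    # on the other machine:
    git fetch origin
    git checkout wip
    git reset --hard origin/wip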

How to version control the build tools and libraries?

What are the recommendations for including your compiler, libraries, and other tools in your source control system itself?
In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact correct configuration of Visual Studio, InstallShield and other tools (including the correct patch version) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control, and then build using them. This would also simplify things in terms of setting up a new build machine -- 1) install our source control tool, 2) point at the right branch, and 3) build -- that's it.
Options I've considered include:
Copying the install CD ISO to source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1 hour build into 3 hours).
Installing the software to source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't take into account the non-file parts of installing your tools, like registry settings.
Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While we capture the state of the "build machine" with ease, we get the overhead of a VM, and it doesn't help with the "make the same tools available to developers" issue.
It seems such a basic idea of configuration management, but I've been unable to track down any resources for how to do this. What are the suggestions?
I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies (COMCAT.DLL, anyone?) on non-development software being installed (Office). Your first two options don't solve anything that has shared COM components. If you don't have any shared-component issues, maybe they will work.
There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical layers in your architecture, like mail server, database server, etc.
This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me.
Put everything needed to build the application on a new workstation under source control.
Keep large applications out of source control, stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files.
Maintain a text document, with the source code, that has a list of the ISO files that will be needed to build the app.
I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain?
Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth...
Just a note on the versioning of libraries in your version control system:
it is a good solution, but it implies packaging (i.e. reducing the number of files of that library to a minimum)
it does not solve the 'configuration aspect' (that is, "what specific set of libraries does my '3.2' project need?").
Do not forget that that set will evolve with each new version of your project. UCM and its 'composite baseline' might give the beginning of an answer for that.
The packaging aspect (minimum number of files) is important because:
you do not want to access your libraries through the network (like through a dynamic view), because the compilation times are much longer than when you use locally accessed library files.
you do want to get those libraries onto your disk, meaning a snapshot view, meaning downloading those files... and this is where you will appreciate the packaging of your libraries: the fewer files you have to download, the better off you are ;)
My organisation has a "read-only" filesystem, where everything is put into releases and versions. Releaselinks (essentially symlinks) point to the version being used by your project. When a new version comes along it is just added to the filesystem and you can swing your symlink to it. There is full audit history of the symlinks, and you can create new symlinks for different versions.
This approach works great on Linux, but it doesn't work so well for Windows apps that tend to like to use things local to the machine such as the registry to store things like configuration.
Are you using a continuous integration (CI) tool like NAnt to do your builds?
As a .Net example, you can specify specific frameworks for each build.
Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.
In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.
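As a rough illustration with the C# compiler (the paths are placeholders and the actual reference list depends on the project):

    csc /noconfig /nostdlib /reference:tools\refs\mscorlib.dll ^
        /reference:tools\refs\System.dll /out:build\App.exe src\Program.cs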
Following up on my own question, I came across this posting referenced in the answer to another question. Although more of a discussion of the issue than an answer, it does mention the VM idea.
As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice.
A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be.
When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system any time, or nuked if the toolchain crashes. That's actually the main point of this scheme, rather than your problem with archiving the state of the toolchain, but I think it's applicable.
Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.