I can't wrap my head around how I'm supposed to use ColdFusion Builder 3 (which is built on Eclipse).
Up until this point, I've been using Dreamweaver 5, which is getting long in the tooth, and I wanted to give CF Builder a try.
So, in Dreamweaver, it's pretty simple: you set up connections to servers using credentials... There's a Local path, which is the local copy of your code, and the webroot of the Server, which is the 'live' copy of your code. Basically, you make a change to the local copy and PUT the change to the Server. Easy peasy lemon squeezy, right?
But, how does this translate to ColdFusion Builder 3?
Just to give you an idea of our infrastructure: we have Development and Production. Each of these boxes has multiple web instances, for example: Accounting, Human Resources, IT. Each of those web instances could have multiple applications... I'm only concerned about my instance, IT, on both the Production and Development servers.
Is a workspace supposed to represent an instance on a web server?
In CFBuilder, should I configure 1 server per web app?
Is a project supposed to represent a web app?
Am I supposed to use drive mappings to the inetpub wwwroot for access to web applications? Is it even considered kosher to have a drive mapping to the web root? \\server\c$\inetpub\wwwroot
Where do I keep my local copy of my code?
How do I move items from Development to Production?
My main confusion is with workspaces, projects, and servers. My intent is to debug and 'view page in browser' from CFBuilder. However, when you set up a server, under Server Mapping and URL Prefix, you're supposed to indicate the Local and Remote paths, and this is not directly related to the physical location of the project. As I've mentioned, there are multiple instances and multiple applications, and the development box is not my local machine; it's a remote server...
I would really like to know how others have made this work for them.
I really don't mind this question even though it's not directly code related because I've been using ColdFusion Builder (CFB) for years and there just isn't enough good documentation out there. I now enjoy a great experience with CFB thanks to blog posts and sharing experiences with other devs :)
My setup: CFB3 running on Windows 8.1, dev server running on a Virtual Machine so it is treated as "remote server" just like yours. I also update remote staging and production servers (although not directly from CFB).
First, let's set some reasonable expectations: Dreamweaver and CFB are very different in that CFB focuses on programming and Dreamweaver on design. CFB is built on Eclipse and therefore has the advantage of benefiting from most Eclipse plugins.
Your question is specifically about how to set up your projects in CFB using 2 remote servers (dev and prod). It's different for everyone but I'll share my setup with you. (sidenote: My projects are also stored in Git repositories - 1 repo for every app)
Starting from the top: A workspace in CFB deals with your whole Eclipse application, not just your projects. The most important things kept in this directory are snippets and plugins. You do NOT need to keep your project files in here. This is merely the main directory where all of your settings are kept. You are not required to have more than 1 workspace (I only have one). Why would you need more than one? You may be a multifaceted programmer who needs to keep separate workspaces using separate tools (like different plugins, snippets, window layouts...).
To answer your next question (1 server per web app): all you need to do is configure your dev servers in the "CF Servers" tab. You need to add 1 server per web instance, for every instance that you'd like to test on. Hopefully, your dev server has RDS enabled (very helpful for remote database and file viewing, just like in Dreamweaver). During configuration, don't worry about Mappings or Virtual Host Settings (I have another recommendation later). Once configured, you'll be able to assign that server to a project.
Drive mappings: I would never recommend mapping to the webroot of a shared dev server. If you were to use that drive map as your local directory, your changes would be made directly on the development server. What you want to do instead is create a new project by right-clicking in the Navigator area and selecting Import > Other > FTP. Follow the steps, choose anywhere on your local drive to store the files, then choose "New project" at the end (this adds the .project file necessary for CFB to control the project).
Once the project is created, right click on it, select ColdFusion Project and choose the CFML Dictionary version you'll be using (CF11, 10, 9...). Then, select ColdFusion Server Settings and choose the dev server. This is necessary for testing.
What you now have is a local directory with your app and eclipse knows about the remote server. In order to synchronize, you right click on the project, go to Team and synchronize from there. For detailed information about synchronization over FTP, see the help section "Guide to WebDAV and FTP".
Moving to production is not as simple as it was in Dreamweaver. The FTP configuration information only allows for 1 connection (thus giving you a list of files synchronized between your project and the dev server). Therefore, you'll need a third party FTP client to synchronize between your local project and your prod server.
As promised, my last entry will be about the "debugging", which is why I said to skip the mappings and virtual host settings in the CF Server config. I really, really recommend a third-party paid plugin called FusionDebug (http://www.fusion-debug.com/). This plugin simplifies the setup and allows you to step into all of your code (which doesn't work so well in native CFB). There's a 30 day trial, and I recommend you try before you buy (or license for a year in this case!)
I'm working on a website using PhpStorm. For a long time I developed it locally, but then I got hosting and a remote ftp server.
I created a new project in PhpStorm with the settings for the remote host, and I found that deploying code takes a long time (over a minute) before I can see the result, which is quite uncomfortable when debugging.
Is there any possibility to work with the code on a local server and, when I think the project is ready to deploy, just send it to the remote server?
I understand that I can just work in two different projects and deploy the "ready" version to the server via FTP, but maybe there is a more comfortable way?
There are several answers to this question, most of them opinion-based, but I will try to keep it objective.
Case 1
A big corporation gives every developer a sandbox to test their code from, and requires every developer to keep their code on the sandbox.
Using mounted drives could be extremely slow, especially when PhpStorm is indexing.
Case 2
An easy way to keep an automatic backup of your code is to use the built-in (S)FTP(S) upload/deploy.
Solution
In both cases you could use the auto-deploy feature, which uploads every change to the server. That way the deploy doesn't take over a minute; the change is usually already there before you know it.
I cannot recommend using this deployment for production, as it bypasses your version control, SAT, security setups, etc. For that case I would suggest something like Rocketeer instead.
EDIT:
As for the 2 projects: you can define 2 different deployment servers, use the default one for your testing (with auto-upload or similar), and then select the other one from the deployment menu when you're ready.
This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI because that is already in wide use in our environment. I have a plug-in and feature built just from the POI .jars that allows me to build our current POI applications, as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an updatesite so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (that I wish existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
One caveat: the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature to my designer client from the eclipse update site and it works great. However, the install is failing when I import that into our updatesite.nsf database. This means that while the developers can all install from the updatesite if I put it on a network drive, that doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse Updater just hangs. I've let it go for well over an hour and eventually Notes becomes completely unresponsive.
So the question is, is there anything I might have done wrong, either in the development of the plug-in or server configuration that might be causing this issue?
Additional Info
I'm looking at the OSGi console and that is largely unhelpful. I am getting the following error as I try to install:

SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know that this post is over 5 years old, but for those who find this and are trying to resolve the error

SEVERE Could not access digest on the site: no protocol:

it is due to the URL of the Domino updatesite.nsf not being added to the Archives tab of the update site project's site.xml.
I found the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE; YMMV from Eclipse). So if anonymous connections are blocked on the Domino server, you will be out of luck.
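For illustration, the resulting entry in site.xml looks roughly like this (the server name, plug-in id, and version below are placeholders, not values from this post):

    <site>
        <!-- Archives tab entry: tells the updater where the archive really lives -->
        <archive path="plugins/com.example.poi_1.0.0.jar"
                 url="http://yourdominoserver/updatesite.nsf/"/>
        <feature url="features/com.example.poi.feature_1.0.0.jar"
                 id="com.example.poi.feature" version="1.0.0"/>
    </site>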
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure plug-in, feature, and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or local Domino server) to the feature directory using a setting. Add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site; Designer then picks up the features and plug-ins from there. After a build you need to restart Designer/the server to activate the updated feature.
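A .link file is just plain text with a single line; for example (the path below is a placeholder for wherever your update site project sits):

    path=C:/work/com.example.updatesite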
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
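If memory serves, that notes.ini setting looks like the line below (treat this as an assumption and verify against your Domino version's documentation):

    OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf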
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.
I have two systems, one in my office and one at home, and I am working on one Java application. The problem I'm facing is that after finishing work at the office I need to continue at home. Before closing Eclipse, I copy the complete project onto a pen drive, then copy it onto my home system, so I can work from home and pick up where I left off at the office. Then I have to do the same thing from home back to the office.
Is there any Eclipse plug-in, or any other way, to synchronize both workbenches?
There are some plug-ins available, like SVN and CVS, but these require a server, a static IP address, etc., which is costly.
Example: Google Drive
If you install Google Drive on two different systems with the same Google account, then any change you make on one system is reflected on the other system as well.
Edited: If you are using a personal computer at work, or if the office computer allows it, you can use Dropbox. Create the project in Dropbox, and then when at work all you need to do is import the project (do not copy it into the workspace). Whatever changes you make are persisted in Dropbox.
It sounds like what you need is a version control system, and one that is available as a free service. This allows you to store the code on an external server and have it reachable both from work and home.
Git is very popular these days for good reasons. It has a good Eclipse plugin, Egit, that comes preinstalled in later Eclipse releases. There are several external repositories that you can use, see this question, or just Google. Many offer free hosting for small projects.
This will require a bit of a learning curve, but it will help you greatly.
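As a rough sketch of the daily round trip (the remote name and branch are placeholders for whatever hosting you pick):

    # at the office, before leaving
    git commit -am "work in progress"
    git push origin master

    # at home, before starting
    git pull origin master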
I use a small (pocket size) external drive. I have eclipse and my workspace on it (and other tools I need) - I can easily plug it into my work or home PC (or client PC if traveling). It works great - just assign it the same drive letter on both home and work PC.
I would also recommend you use a code repository in addition to an external drive to store the source code - CVS, SVN, Git, etc.
I am using TortoiseSVN 1.6, SubversionEdge as the SVN server, and FileZilla 3 (the Test Server runs CentOS).
Let's assume the scenario:
- A Test Server exists; developers have direct access to it, and it is used for user testing
- There are 3 members in a team
- 2 of the members develop on their local machines using TortoiseSVN
- But 1 wants to develop directly on the Test Server
The issues with developing directly on the Test Server are:
1.) No TortoiseSVN is installed there
2.) Even if SVN existed on the Test Server, command-line scripts are tedious since it runs on CentOS (no GUI)
This can be resolved with team management, but the challenging part here is how to address the technical issue (as this may be a future need).
QUESTION
So, my question is: is there a way to integrate TortoiseSVN with FileZilla?
Or is there a way to have the files on the Test Server updated automatically after committing changes from a working copy?
If you were in my situation, how would you address such an issue, aside from just team management/agreement?
TortoiseSVN is an Explorer shell extension. It has no way to hook into FileZilla to access the files on the FTP server.
What you can do is use an SMB share over a VPN. TortoiseSVN is then able to see the files directly and display them correctly in Explorer, although this solution may be quite slow depending on your network connection.
However, what I usually do is develop locally, connect to the server via SSH, and then use the svn command-line utility to do updates there.
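If you want the Test Server to pick up every commit automatically, one common pattern (a sketch, not something from this setup; the host and paths are placeholders) is a post-commit hook on the SVN server that updates a checkout on the test machine:

    #!/bin/sh
    # hooks/post-commit on the SVN server:
    # update the Test Server's working copy after every commit
    ssh deploy@testserver "svn update /var/www/testsite"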
How do you manage your project life cycle?
For example: Do you start with a template? Do you use versioning such as SVN as the authoritative source? Do you archive the projects, if so when and how? When a project is revived (work resumes), how’s that handled? Do you use automated scripts to do things such as create IIS sites, DBs, archive, launch, etc?
Of particular interest is management of many projects at varying points of development.
Development: We do not start with a template, because the world changes quickly enough to make template maintenance a full-time job. We do encourage everybody to use the same IDE (Eclipse), so that they can help each other with their environments.
Project Management: We are using GForge to manage our projects. SourceForge is slightly better, but GForge is much cheaper and has a different licensing fee model. GForge incorporates CVS, SVN, document storage, and issue trackers, and integrates everything nicely. This makes it easy to see where the project is at: open issues, and closed issues with connected code changes, everything is integrated.
Versioning: Although we tried SVN, we switched back to CVS because it fits our needs better and works fine.
Backups: Our GForge server, housing all our projects and source code, runs on a VMware ESX server. Backups are done daily at the VM level, and we make VM snapshots if we feel we need more frequent restore points for some reason.
Reviving projects: This is very common in our business. Every project has all its libraries and build requirements in CVS. The project always has an up-to-date development manual which describes all the steps to get a development environment running, and has a chapter on all the non-default things to pay attention to. We try to build software in an as-default-as-possible environment so that developers don't have to spend days tweaking their settings.
Nearly all projects are built using Maven, which also makes life easy for our developers. Usually reviving a project only takes a few steps (see the command sketch after the list):
Download eclipse
Connect to CVS over SSH (extssh is built into Eclipse)
Check out the project (default "Check Out" option)
Run "Maven Eclipse" and refresh Eclipse
Run the unit tests in Eclipse to see if everything is working.
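In practice, the "Maven Eclipse" step is a single command (assuming the maven-eclipse-plugin, which is what that step normally refers to):

    # generate Eclipse .project/.classpath files from the POM
    mvn eclipse:eclipse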
Builds: All our projects are built on a separate build server. Every morning the build server does a complete build and tags CVS if all unit tests succeed. During the day, hourly builds are made, and when there are failures the team automatically gets an email. Usually we use one build server per project, and it is a simple Luntbuild server (Linux, Tomcat, Luntbuild).
Both the build server and sometimes even the developer machines are VMs. This makes reviving a project really easy: get the VM from the file server, start it up, and you're good to go.
The build server creates daily sites which show unit test coverage statistics, complexity measurements, CVS activity, and developer activity (who changed what and when).
All our software comes with self-building database scripts built in. Point the config file at the database, start the software, and it figures out what it needs to do to the database by itself. This really comes in handy because the build server can just start the software; no special steps needed. Our customers are also happy: they never need to worry about their database or upgrade scripts.
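The underlying pattern is simple; here is a minimal sketch in Java (the table name, file layout, and class are invented for illustration, not our actual code):

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SchemaUpgrader {
        private static final int EXPECTED_VERSION = 5; // version this build requires

        public static void upgrade(Connection con) throws SQLException, IOException {
            int current = readVersion(con);
            try (Statement st = con.createStatement()) {
                // run every upgrade script between the stored and the expected version
                for (int v = current + 1; v <= EXPECTED_VERSION; v++) {
                    st.execute(loadUpgradeScript(v));
                    st.execute("UPDATE schema_version SET version = " + v);
                }
            }
        }

        private static int readVersion(Connection con) throws SQLException {
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT version FROM schema_version")) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }

        private static String loadUpgradeScript(int v) throws IOException {
            // upgrade-1.sql, upgrade-2.sql, ... shipped inside the application jar
            try (InputStream in = SchemaUpgrader.class
                    .getResourceAsStream("/upgrade-" + v + ".sql")) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        }
    }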
The whole project lifecycle is managed, documented and tracked in GForge, with the addition of some external spreadsheets for budget tracking because that's simply easier.
Whether you have an integrated project server or not, I think it is really important to have a system. This enables you to switch developers between projects without them getting lost. It saves time, particularly when a customer comes back to you after 2 or 3 years for modifications to old software (yes, that happens).
All the stuff we use is open source (you can even use an open source fork of GForge). It's not in the tools, it's how you use them.
It would depend on the nature of the work. When working at home for private clients, I start by opening a folder for the client with a bunch of standard documents, which I customize, such as contracts, invoices, reports, requirements, testing, code repository, etc. As the project develops, I add/modify the directory as required.
If I had to go back to a project, I would reopen that directory, and for any non-common components, create a new directory. For example, if my client had a web application built, and now they need a second application, I would use the same directories for invoices and contracts and create new directories for the code base, requirements and testing.
In terms of backup, I archive the work at any point where I've reached a milestone, with the exception of code, which I back up daily at a minimum. At the end of each project, when I close a contract, I take the entire directory and compress it and store it on a remote server.
I create folders containing the project stages, like "initialize software process", where we place docs like the business proposal; we use another one for requirements, another for construction, releases, one for meeting minutes, and so on.
We keep those under a Subversion repository, but it really depends on what methodology you are using; it also depends on how you handle configuration management and how organized you want to be. And yes, we use templates for most of our artifacts, so we assure in some way the quality of our products.
As for source code, we have it all in a Subversion repository. After each release, we make a branch; new features only get added to the current branch (on which the next release will be based), while critical bug fixes are done in both the current and the old branch (so we can deliver hotfixes for the version the customers currently have).
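That branching step is a single server-side copy in Subversion; a sketch with placeholder repository URLs:

    # create a maintenance branch right after releasing 1.0
    svn copy http://svnserver/repo/trunk \
             http://svnserver/repo/branches/release-1.0 \
             -m "Branch for 1.0 maintenance"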
As for all documents belonging to a release - from the planning & resource sheets to specifications, test cases, and user and technical manuals for the software we create - we store them in a SharePoint portal site. The advantages of this SharePoint site are that users have access via a website (so no need to grant management access to your repository ;-), you can finely control the access rights, and you can turn on versioning. We also use tagging to mark whether a document belongs to a specific release (e.g. service pack xy) or product, or whether it is generally valid.
Concerning scripts, we use several to perform e.g. the nightly build plus unit tests (we usually do that for the last and the current release), but also to deploy the complete software solution (including IIS site creation, database data model upgrade, ...) on our test servers. These are NAnt scripts using lots of variables for paths, version numbers, etc., so it is very easy to copy and modify them for a new release.
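A skeleton of such a script might look like the following (the target, paths, and version are illustrative placeholders, not our actual build):

    <?xml version="1.0"?>
    <project name="deploy" default="deploy">
        <!-- central place for everything that changes per release -->
        <property name="version" value="2.3.0"/>
        <property name="deploy.dir" value="\\testserver\wwwroot\app"/>

        <target name="deploy">
            <!-- copy the build output for this version to the target server -->
            <copy todir="${deploy.dir}">
                <fileset basedir="build\${version}">
                    <include name="**/*"/>
                </fileset>
            </copy>
        </target>
    </project>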