For the last couple of years, I've been developing a web application based on CodeIgniter. CI has served me well to date, but for the next generation of the software, I'm looking to move to PHP 5.3 and a more robust framework. I've watched FuelPHP since it showed up about a year ago and now that I'm getting to the point of starting the development of the next version of the application in earnest, I'm interested in giving FuelPHP a go.
My application relies on multiple application directories. Essentially, there's a system application containing the system's core functionality, code that shouldn't be touched by admins because it will be overwritten during updates. In addition, there's a user application directory where admins can extend and override system classes. This way, admins can customize the system without ever touching the system core (thus insulating them from losing their modifications when the system is updated). When a request comes in, I want the system to first check the user application directory; if the controller isn't found there, it should move on to the system application directory (where, in theory, it will find the file) and use that controller.
I don't want to make the mistake of approaching this problem from a CI or Kohana mindset, so what I'm wondering is what's the best way to go about doing this in FuelPHP? Since I don't have much experience with FuelPHP, I was hoping someone might be able to give me some pointers or shove me in the right direction.
Thanks!
FuelPHP has an 'app' folder that you can consider the core of your application. For smaller applications, it can also contain your application code.
For larger and/or more complex applications, use modules. A module has exactly the same folder structure as 'app', but lives in its own namespace (= the module folder name). FuelPHP supports multiple module locations, so you could have one location containing modules you share across different websites, and another for modules specific to your website.
Without any special routing, if the first segment in the URI is a module name, controllers from that module will be loaded.
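If I've understood the module mechanics correctly, one way to get your user-overrides-system lookup order is to register two module locations and list the user location first. A minimal sketch for FuelPHP 1.x (the 'user_modules' and 'core_modules' folder names are invented for this example, not FuelPHP defaults):

```php
<?php
// app/config/config.php -- sketch only; the folder names are assumptions.
return array(
    'module_paths' => array(
        APPPATH.'user_modules'.DS,  // admin-customisable modules, searched first
        APPPATH.'core_modules'.DS,  // untouchable system core, searched second
    ),
);
```

Since locations are searched in the order they are listed, a module (and its controllers) found in the user location should shadow a same-named module in the core location; you'd want to verify that lookup order against the docs for the FuelPHP version you settle on.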
I work at a construction projects company developing AutoCAD tools, mostly with the integrated VBA editor.
The company wants the developed DVB files to stay inside the company, or to somehow be made useless when they are carried outside.
I know that by password-protecting the created DVB files, the code can be hidden (although after five minutes of Googling I discovered that it is trivial to unlock them). I am trying to find a way for the developed VBA files to be usable and executable in the office while their code stays hidden, so that employees cannot use them outside the office.
I am not sure if this is even possible. I know that if I develop external EXE files I can use several methods (connect to a local server before running, use a USB stick key, etc.), but I wonder whether I can guarantee that the code I wrote in the AutoCAD VBA editor will not be seen and cannot be used outside the office.
Thank you for all the help in advance.
P.S.: Using AutoCAD 2010 on Windows 7 SP1
In short, you cannot completely protect your DVB source files. As you discovered, information on breaking the password protection is readily available, and doing so is trivial for a tech-savvy user.
If your goal is to prevent users from simply taking the DVB file with them and using it elsewhere (without source modification), you can embed checks into the code which will cause it to fail. For example, ping your Domain Controller by name and, if no response is returned, stop with an error. This, however, could be removed if someone edited the code (see the first point above).
If you do need protection on your source, you don't want to go the DVB (which is VBA code) route. Instead you will want to develop a true plugin with .NET (which would require a rewrite). Of course this isn't foolproof either, as .NET code can easily be decompiled to source; however, running it through a good obfuscator makes it difficult (but still possible) for even the most dedicated to modify.
In short, there is no way to fully protect your source; you can only make it more difficult for someone to reverse engineer.
I'm almost done with our custom CMS system. Now we want to install it for different websites (and more in the future), but every time I change the core files I will need to update each server/website separately.
What I really want is to load the core files from our server, so that when I install a CMS I only define the needed config files (on that server) and the rest is loaded from our server. This way I can push changes to the core very simply, and only once.
How do I do this, or is this completely the wrong approach? If so, what is the right way? What do I need to look out for? Is it secure (without paying thousands for an HTTPS connection)?
I have no real idea how or where to begin, and I couldn't find anything helpful (maybe I searched for the wrong things), so anything is helpful!
Thanks in advance!
Note: My application is built using the Zend Framework
You can't load the required files remotely at runtime (or really don't want to ;). This comes down to proper release & configuration management, where you update all of your servers, and that can mostly be done automatically.
Depending on how much time you want to spend on this mechanism, there are some things to be aware of. The general idea is that you have one central server which holds the releases, and all other servers check it for updates, then download and install them. There are lots of possibilities (svn, archives, ...), and the check/update can be done manually at the frontend or by crons in the background. Usually you'll update all changed files except the config files and the database, as they can't simply be replaced but have to be modified in a certain way (this is where update scripts come into play).
The svn-based approach could look like this:
A cronjob runs on each server and checks for updates via svn
If there is a new revision, it performs an svn update
This is a very easy mechanism to implement, but it has drawbacks: you can't change the config files or the database. Well, in fact it would be possible, but quite difficult to achieve.
Maybe this could be easier with an archive-based solution:
A cronjob checks the update server for a new version. This could be done by reading the contents of a version file on the update server and comparing it to a local copy
If there is a new version, download the related archive
Unpack the archive and copy the files
With that approach you might also be able to include update scripts in a release to modify configs/databases.
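As a very rough illustration of the archive-based variant, a client-side cron script might look like this (all URLs, paths and file names here are invented for the example, and the release archive is assumed not to ship config files):

```php
<?php
// update_check.php -- run from cron on each client server.
// Everything below (server URL, paths, archive naming) is an assumption.
$updateServer  = 'http://updates.example.com';
$installDir    = '/var/www/cms';

$localVersion  = trim(file_get_contents($installDir . '/VERSION'));
$remoteVersion = trim(file_get_contents($updateServer . '/VERSION'));

if (version_compare($remoteVersion, $localVersion, '>')) {
    // Download the release archive for the new version
    $archive = '/tmp/cms-' . $remoteVersion . '.zip';
    file_put_contents($archive, file_get_contents($updateServer . '/cms-' . $remoteVersion . '.zip'));

    // Unpack over the existing install; configs survive because the
    // archive is assumed not to contain them
    $zip = new ZipArchive();
    if ($zip->open($archive) === true) {
        $zip->extractTo($installDir);
        $zip->close();
        file_put_contents($installDir . '/VERSION', $remoteVersion);
    }
}
```

A real implementation would also want locking, checksum verification of the downloaded archive, and a hook to run any bundled update scripts after unpacking.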
Automatic update distribution is a very, very complex topic, and these are only two very simple approaches. There are probably many different solutions out there, and selecting the right one is not an easy task (it gets even more complex if you have different versions of a product with dependencies :) and there is no "this is the way it has to be done".
We have a web-based application at work which comes with a bundled web server (Apache Tomcat) and is used for network monitoring/patch management. It allows for personalisation, all sorts of rules, custom UI design using proprietary components and a definition language, and even custom code to fire on events (based on Java).
I am in a team of several developers, each of whom will be customising this app to meet various requirements. As it's a server app, not a codebase, what's the best way to set up a dev environment for more than one user?
If there is one single shared VM with this app, I don't know how well source control like TFS would work with this sort of system. I also think developers working on various parts of the project may need the same file at the same time (though TFS does support multiple check-outs).
What is the best way to develop against this sort of product? Bear in mind that even with personal VMs and an instance of the app, changes have to be merged into one central instance. Something keeps making me think app virtualisation could help here.
Thanks
If it is just an instance of Tomcat (even though it was bundled) couldn't you put the whole Tomcat directory and all of its subdirectories under source control? You just need to check in the non-binary parts, so exclude all the .jar, .exe, .tar.gz and .dll files when you check in. That's what I would do, unless I misunderstood your question.
I'm not familiar with the source control software that you mentioned. I have been using SVN (which is free) and TortoiseSVN as a client (also free). Just an option if your software can't support what I've suggested.
Web development isn't what it used to be. It used to consist of hacking together a few PHP scripts (I have nothing against PHP; actually, it's currently my main programming language), uploading them via FTP to some webhost, and that was that. Today, things are more complicated. As I can see by looking at a number of professional and modern websites (SO being the main one; I consider SO a great example of good practice in web development, even if it's made with ASP.NET and hosted on Windows), developing a website is much more than that:
The website code is actually in a repository (that little svn revision in the footer makes my nerdy feelings tingle);
Static files (CSS, JavaScript, images) are stored on a separate domain;
Ok, these were my observations. Now for my questions:
What do you do with JavaScript and CSS files? Do you just not keep them under version control? That would seem stupid. Do you create a separate repository for them?
How do you set up the repository? Do you just create one in the root of the web server? Or do you create some sort of post-commit trigger that copies the latest files to their appropriate destinations?
What happens if you have multiple machines running the website and want to push some changes to all of them?
Every such project has to have configuration files, and these differ between the local copy and the remote one. For example, on my development machine I have no MySQL root password, while on the production server I certainly do. That password would be stored in a config file, amongst other such things, which would be completely different on my machine and on the server. Maybe they differ between production machines, too (like I said earlier, maybe the website runs on multiple machines for load balancing). How do I handle that?
I'm looking to start a new web project using:
Python + SQLAlchemy + Werkzeug + Jinja2
Apache httpd + mod_wsgi
MySQL
Mercurial
What I'd like is some best practice advice on using the aforementioned tools and answers to my questions above.
You're right, things can get complicated when trying to deploy a scalable website. Here are a few guidelines I've found to be good (disclaimer: I'm a Rails engineer):
Most of the decisions regarding file structure for your code repository are largely based upon the conventions of the language, framework and platform you choose. Many of the questions you brought up (JS, CSS, assets, production vs development) are handled by Rails. However, that may differ from PHP to Python to whichever other language you want to use. I've found you should do some research about the language you're choosing, and try to fit the conventions of that community. This will help you when you hit an obstacle later: your code will be organized like their code, and you'll be able to get answers more easily.
I would version control everything that isn't very substantial in size. The only problem I've found with VC is when your repo gets large. Apart from that I've never regretted keeping a version of previous code.
For deployment to multiple servers, there are many scripts that can help you accomplish what you need to do. For Ruby/Rails, the most widely used tool is Capistrano. There are comparable resources for other languages as well. Basically you just need to configure what your server setup is like, and then write or look to open source for a set of scripts that can deploy/rollback/manipulate your codebase to the servers you've outlined in your config file.
Development vs Production is an important distinction to make. While you can operate without that distinction, it quickly becomes cumbersome when you're having to patch up code all over your repository. If I were you, I'd write some code that runs at the beginning of every request and determines which environment you're running in. Then you have that knowledge available to you as you process the request. It can be used for everything from specifying which configuration to use when you connect to your db, all the way to showing debug information in the browser only in development. It comes in handy.
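For example, a bare-bones version of that environment switch might look like this. It's sketched in PHP (the asker's current main language); the APP_ENV variable name and the config file layout are made up for the example:

```php
<?php
// Hypothetical environment switch, run at the top of every request.
// APP_ENV could be set per machine, e.g. via SetEnv in an Apache vhost.
$env = getenv('APP_ENV') ?: 'production';   // default to the safe choice

$configFile = __DIR__ . '/config/' . $env . '.php';  // config/development.php, config/production.php
if (!is_file($configFile)) {
    throw new RuntimeException("Missing config for environment '$env'");
}
$config = require $configFile;  // each file returns an array of settings

// Environment-specific DB credentials come from the loaded config
$pdo = new PDO($config['dsn'], $config['db_user'], $config['db_pass']);

// Show debug output only outside production
if ($env === 'development') {
    ini_set('display_errors', '1');
    error_reporting(E_ALL);
}
```

The same pattern translates directly to Python: an environment variable selecting a settings module.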
Being RESTful often dictates much of your design with regards to how your site's pages are discovered. Trying to keep your code within the restful framework helps you remember where your code is located, keeps your routing predictable, keeps your code from becoming too coupled, and follows a convention that is becoming more and more accepted. There are obviously other conventions that can accomplish these same goals, but I've had a great experience using REST and it's improved my code substantially.
All that being said, I've found that while you can have good intentions to make a pristine codebase that scales infinitely and stays nice and clean, it rarely turns out that way. If I were you, I'd do a small amount of research on what you feel most comfortable with and what will make your life easier, and go with that.
Hopefully that helps!
While I have little experience working with the tools you've mentioned, except for MySQL, I can give you a few fairly standard answers for the questions you posted.
1) Depends on the details, but most often you keep them in the same repository but in a separate folder.
2) Just because something is committed to the repository doesn't mean that it's ready to go live; it's quite often an intermediate build that could be riddled with bugs. A publish is done manually, with an export from the repository. Setting up the webserver in the same folder as an svn checkout is a huge no-no, as the .svn folder contains quite a bit of sensitive information, such as how to push changes to the svn server.
3) You use some sort of NAS or SAN solution, or simply a network share on one of the servers, and read all your data from there. That way, when you push information to one place, it's accessible by all servers. If your network is slow, you set up scripts that push the files out to all the servers automatically from a single location. If you use a multi-server environment in ASP.NET, don't forget to update the machine key in the config files, or your shared encrypted caches, like the viewstate, won't work across servers. Having a session store in a database is also a good idea.
4) I've got a post-build step that only triggers on publish, which replaces my database connection strings with production ones and also changes my Production app config value from false to true in the published web.config/app.config files. I can't see any case where you'd want different config files for different servers serving the same content.
If something is unclear, just comment and I'll try to clarify.
Good luck! // Eric Johansson
I think you are mixing two different aspects: source control and deployment. Just because you have all your files in a single repository doesn't mean they have to be deployed that way. It's also arguable whether you should be deploying directly from source control, or instead using a build/deploy script which could handle any number of configurations.
Also, hosting static files on a separate domain only really becomes worthwhile on high-traffic websites. Are you sure you aren't prematurely optimising?
We have a web application which contains a bunch of content that the system operator can change (e.g. news and events). Occasionally we publish new versions of the software. The software is being tagged and stored in subversion. However, I'm a bit torn on how to best version control the content that may be changed independently. What are some mechanisms that people use to make sure that content is stored and versioned in a way that the site can be recreated or at the very least version controlled?
When you identify two sets of files which have their own life cycles (software files on one side, "news and events" on the other), you know that:
you cannot version them together at the same time
you should not put the same label on both
You need to save the "news and events" files separately (either in the VCS, or in a DB as Ian Jacobs suggests, or in a CMS - Content Management System), and find a way to link the two together (an id, a timestamp, a meta-label, ...).
Do not forget you are not only talking about two different sets of files in terms of life cycle, but also about different sets of files in terms of their very natures:
Consider the terminology introduced in this SO question "Is asset management a superset of source control" by S.Lott
software files: Infrastructure information, that is "representing the processing of the enterprise information asset". Your code is part of that asset and is managed by a VCS (Version Control System), as part of the Configuration management discipline.
"news and events": Enterprise Information, that is data (not processing); this is often split between Content Managers and Relational Databases.
So not everything should end up in Subversion.
Keep everything in the DB, and give every transaction to the DB a timestamp. That way you can keep standard DB backups and load the site content as it was at whatever date you want if the worst happens.
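As an illustration of that idea, an append-only revision table might look like this (table and column names are invented; sketched in PHP with PDO):

```php
<?php
// Hypothetical append-only storage for news items: every save INSERTs a
// new revision row instead of updating in place, so history is never lost.
$pdo = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');

// Save a new revision of news item #42
$stmt = $pdo->prepare(
    'INSERT INTO news_revisions (news_id, title, body, created_at)
     VALUES (:id, :title, :body, NOW())'
);
$stmt->execute(array(':id' => 42, ':title' => 'Updated headline', ':body' => '...'));

// Recreate the site content as it looked on a given date: the latest
// revision of each item created on or before that date.
$stmt = $pdo->prepare(
    'SELECT r.* FROM news_revisions r
     JOIN (SELECT news_id, MAX(created_at) AS latest
             FROM news_revisions
            WHERE created_at <= :as_of
            GROUP BY news_id) t
       ON r.news_id = t.news_id AND r.created_at = t.latest'
);
$stmt->execute(array(':as_of' => '2011-06-01 00:00:00'));
$contentAsOfDate = $stmt->fetchAll(PDO::FETCH_ASSOC);
```

Combined with normal DB backups, that gives you point-in-time recovery of the content without involving the code repository.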
I suppose part of the answer depends on what CMS you're using, and how your web app is designed, but in general, I'd regard data such as news items or events as "content". In other words, it's not part of your application - it's the data which your application processes.
Of course, there will be versioning issues between your CMS code and your application code. You could manage this by defining the interface between the two. Personally, I'd publish the data to the web app as XML, which gives you the possibility of using XML schema to define exactly what the CMS is required to produce, and what the web app should expect to process.
This ought to mean that most changes in the web app can be made without a corresponding alteration in the rendering of the data. When functionality changes require this, you can create a new version of the schema and continue to make progress. In this scenario, I'd check the schema in with the web app code, but YMMV.
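For example, the web app could refuse to process an export that doesn't validate against the agreed schema version. A sketch using PHP's DOM extension (the file names and the schema itself are invented for the example):

```php
<?php
// Validate a CMS export against the agreed XML schema before consuming it.
$doc = new DOMDocument();
$doc->load('/data/cms-export/news.xml');      // hypothetical export path

libxml_use_internal_errors(true);             // collect errors instead of emitting warnings

if (!$doc->schemaValidate('/data/schemas/news-v2.xsd')) {
    foreach (libxml_get_errors() as $error) {
        error_log(trim($error->message));     // log each validation problem
    }
    throw new RuntimeException('CMS export does not match the expected schema version');
}

// From here on, the rendering code can rely on the structure the schema guarantees.
```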
It isn't easy, and it gets more complicated again if you need additional data fields in your CMS. Expect to plan for a fairly complex release process (also depending on how complex your Dev-Test-Acceptance-Production scenario is.)
If you aren't using a CMS, then you should consider it. (Of course, if the operation is very small, it may still fall into the category where doing it by hand is acceptable.) Simply putting raw data into a versioning system doesn't solve the problem - you need to be able to control the format in which your data is published to the web app. Almost certainly this format should be something intended for consumption by software, and therefore not usually suitable for hand-editing by the kind of people who write news items or events.