We use RackSpace for our private cloud and we spin up RHEL instances all day, every day, which brings me round to the issue of managing updates on these systems.
Until we get around to sorting out the licensing I have told everyone to use CentOS, so that I can manage the repos and take control over which packages are available, security updates and the like.
Do I have to use RHN to manage RHEL packages, or can I run an in-house system that the machines collect packages from?
/* please excuse the naivety of this question - personally, I can't see any reason why I can't use "createrepo", but I thought I'd just ask the community to make sure */
CentOS security patches usually land within 2-3 days of RHEL's, so you should be fine (point releases are a whole other story!). Depending on your company size, etc., you may want to pick up a few RHEL licenses anyway to help contribute...
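If you do go the in-house route, here is a minimal sketch of what that could look like with createrepo (the hostname and paths are placeholders; serve the directory with whatever web server you already have):

    # on the box that will serve packages (paths/hostname are illustrative)
    mkdir -p /var/www/html/internal/el6/x86_64
    cp /path/to/approved/*.rpm /var/www/html/internal/el6/x86_64/
    createrepo /var/www/html/internal/el6/x86_64/

    # on each client, point yum at the internal repo
    cat > /etc/yum.repos.d/internal.repo <<'EOF'
    [internal]
    name=Internal approved packages
    baseurl=http://repo.example.com/internal/el6/x86_64/
    enabled=1
    gpgcheck=0
    EOF
    yum clean all && yum makecache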
So this may seem unusual - but I am using a remote Linux server with GPUs. The GPUs have old drivers and I do not have permission to update them. Can I do that within a virtualenv? The goal is then to use CUDA 10 + cuDNN and do all the nice things possible with the recent versions of this software.
virtualenv only concerns the Python interpreter and its packages. It cannot handle hardware drivers - you absolutely need root permissions to install a hardware driver. Your best bet is to contact the administrators of the server and ask them to upgrade. Alternatively, you can rent your own server, which is not nearly as expensive as it used to be (e.g. from AWS).
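If you want to confirm what the server actually has before escalating, a quick check (assuming the NVIDIA tools are installed on the box) would be:

    # print the installed driver version; CUDA 10 needs a reasonably recent one
    nvidia-smi --query-gpu=driver_version --format=csv,noheader
    # print the CUDA toolkit version, if a toolkit is on the PATH at all
    nvcc --version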
I have two CentOS platforms. Both run "CentOS release 5.10 (Final)". One is a "real" machine and the other is a VM. Both are 64 bit. Call the real machine Prod and the VM Spare.
When I got this gig I was told that the two machines were identical. Spare is supposed to be a hot spare for Prod. It is now obvious that is not true. The two machines have different yum repo lists. There are duplicate looking install packages from different channels. Prod looks like a server. Spare looks like it had been somebody's desktop with Evolution, OpenOffice and other desktop cruft.
Prod and Spare have similar applications installed but found in different repos so the available yum update levels are different.
I have tried disabling the non-standard repos and uninstalling the non-standard packages. This has led to tears, as removing X Windows, for example, triggered the removal of hundreds of dependent modules that in turn have dependents, which in the end left Spare deaf, blind and mute. Blessedly we had a copy of the VM.
My latest idea is to migrate both machines to the latest stable CentOS level and basically have a do-over. The downside (I think) is the downtime to the production machine and unknown custom software vs new package level issues.
My basic question is: what is the best way to make the platforms as identical as possible while minimizing (or, better yet, eliminating) downtime?
How should we maintain packages and other installs across them into the future? I am aware of Puppet, Chef and CFEngine but have not used them before. Are these the way to go for the future? Something else?
This is not really a programming-related question (you might have better luck at https://serverfault.com/).
Your question is quite broad, but essentially you want two machines that are as identical as possible, one production, one VM, correct?
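A reasonable first step is simply to measure how far apart the two machines are today. Something like the following (hostnames are placeholders) gives you a package-level diff to work from:

    # snapshot the installed package set on each machine, then compare
    ssh prod  'rpm -qa --qf "%{NAME}.%{ARCH}\n" | sort -u' > prod-packages.txt
    ssh spare 'rpm -qa --qf "%{NAME}.%{ARCH}\n" | sort -u' > spare-packages.txt
    diff prod-packages.txt spare-packages.txt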
To get machines into a consistent state, you'll need a configuration management tool of some sort. Ansible is probably the easiest to set up and get cracking with; at its most basic it is essentially a set of nice wrappers around SSH. With it you can make consistent changes to servers and easily track those changes as they happen.
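As a taste of the "wrappers around SSH" idea, a minimal ad-hoc Ansible sketch might look like this (inventory names are placeholders):

    # inventory listing the two machines (names are illustrative)
    cat > hosts.ini <<'EOF'
    [centos]
    prod.example.com
    spare.example.com
    EOF

    # check SSH connectivity to both hosts
    ansible -i hosts.ini all -m ping

    # apply the same package state to both machines (requires sudo on the targets)
    ansible -i hosts.ini all -b -m yum -a "name=httpd state=present"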
To have a VM you can easily provision, I recommend reading up on Vagrant and Packer: Vagrant to easily create a VM that accurately reflects your production environment, Packer so you can repeatably build an image for various platforms. In the ideal case you use the same configuration tool to provision the VM, meaning you can test your production changes on a VM first.
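The Vagrant side can be as small as this (the box name is illustrative; pick one that matches your CentOS release):

    vagrant init centos/7   # generates a Vagrantfile pointing at the named box
    vagrant up              # boots the VM
    vagrant ssh             # drops you into it for testing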
In general, aim for repeatable, automated configuration that you can easily test. I'd also recommend reading up on the concept of DevOps.
I have a question. I am a bit of a Linux user and sort of a programmer, but what I couldn't work out is how to develop your own virtual appliance, similar to Bitnami and TurnKey. If there is a way, please tell me!
Take a look at TKLPatch, a simple tool for customizing and extending any of 100+ appliances in the TurnKey Linux library. The resulting patch can be used to generate an ISO that can be installed in a VM or on real hardware.
If you have any questions or need help, feel free to post to the TurnKey forum.
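Roughly, a TKLPatch is just a directory containing an overlay (files copied onto the appliance filesystem) and a conf script (commands run inside it). The layout below follows the TurnKey docs, but treat the details as a sketch rather than gospel, and the ISO filename as a placeholder:

    mkdir -p mypatch/overlay/etc
    echo "patched appliance" > mypatch/overlay/etc/motd   # example overlay file

    cat > mypatch/conf <<'EOF'
    #!/bin/bash -e
    # anything you can script can go here
    apt-get update
    apt-get -y install htop
    EOF
    chmod +x mypatch/conf

    # apply the patch to a stock TurnKey ISO to produce a customized ISO
    tklpatch turnkey-core.iso mypatch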
Updated info
Hopefully my necro-posting adds some value...
The new TurnKey build tool is TKLDev. It uses a similar paradigm to TKLPatch, but instead of requiring you to start with an ISO, it builds completely from source.
So long as you can script the install (and there's almost always a way you can) and it works on Debian, you can build yourself a software appliance in a range of build types (including OVA, VMDK, hybrid ISO, etc.) using TurnKey Linux's TKLDev build engine. The major VM platform it doesn't (yet) support is Hyper-V, although the ISO installs there.
My team plans to hire a couple of part time contract programmers and interns soon and I would like to reduce the amount of setup time involved with getting each new intern's dev environment up and running.
Considerations:
They will be working on PC-based laptops or desktops running Windows with VMware Server and Ubuntu, or just Ubuntu. The computers may or may not have identical hardware.
Don't want to spend a ton of money, but enough should be spent to ensure they are not frustrated by slow computers, etc.
The environment includes Ruby on Rails, Git, Passenger, Capistrano and Memcached.
Any suggestions are welcome. If there is a good way to do this using Apple Mac minis, that is something we would consider too.
I'd recommend getting everyone on Ubuntu and writing a setup script that runs the necessary apt-get install invocations (and possibly wget and dpkg commands) needed to produce the standard environment. Then you simply keep a copy of that script on an internal website, and you can either run it on your interns' or contractors' machines or have them run it themselves.
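A stripped-down version of such a script, based on the stack listed in the question (package names and the .deb URL are illustrative; adjust for however you actually install Ruby), might look like:

    #!/usr/bin/env bash
    set -euo pipefail

    # base tooling and services from the question's stack
    sudo apt-get update
    sudo apt-get install -y git memcached build-essential libssl-dev ruby ruby-dev

    # Rails and deployment gems
    sudo gem install rails bundler capistrano passenger

    # example of pulling in a package that isn't in the Ubuntu repos
    wget -O /tmp/internal-tool.deb http://intranet.example.com/packages/internal-tool.deb
    sudo dpkg -i /tmp/internal-tool.deb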
If they're using Windows, it's slightly harder to do, but you could probably write a batch script that installs Python and then runs a Python script to do the remaining setup (I suggest that simply because trying to do anything even slightly sophisticated in batch is a good way to drive oneself insane).
I am going to be moving all my websites to a Windows Web Server 2008 R2 machine. I have installed it in a virtual machine to test that my websites work with it.
I have noticed that there is a program called Web Platform Installer. I have used it to install a few sites, but I was just wondering: is it a security risk to use this? Would it be better for me to manually install the sites (WordPress, Umbraco, etc.)?
Thanks
We push that out to all our customers just for ease of deployment, and I have not seen any security issues with it. However, I would question its reliability, as it fails about 10% of the time (to install whatever I have selected). Having said that, when it does work it's a fairly good tool, as it will install any prerequisites you may not have been aware of (like SMO, or MySQL if you try to install WordPress without it), and it will also keep you up to date on newer versions of the software you have installed.