I created an instance with CentOS-7-x86_64-GenericCloud.qcow2, then listed the services with the 'chkconfig' command. In that list, the cloud-init service (among others) is on. With cloud-init enabled, rebooting the instance takes almost 5 minutes, which is far too long. How can I resolve this problem?
It could be a network problem that prevents cloud-init from executing correctly. You can check by analysing the cloud-init logs at /var/log/cloud-init.log and /var/log/cloud-init-output.log.
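To see where the time goes, here is a minimal sketch that flags large gaps between consecutive log lines, which usually point at the stage that is stalling (a 5-minute hang is often a network/metadata timeout). It assumes the default "YYYY-MM-DD HH:MM:SS,mmm" timestamp prefix that cloud-init's file logger writes; adjust LOG_PATH and the threshold for your image:

    # Report gaps of more than 10 seconds between timestamped log lines.
    import re
    from datetime import datetime

    LOG_PATH = "/var/log/cloud-init.log"
    TS = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d{3}")

    prev = None
    with open(LOG_PATH) as f:
        for line in f:
            m = TS.match(line)
            if not m:
                continue
            t = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
            if prev is not None and (t - prev).total_seconds() > 10:
                print(f"{(t - prev).total_seconds():5.0f}s gap before: {line.strip()[:100]}")
            prev = t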
Which hypervisor/cloud are you working with?
It's hard to diagnose your problem with the limited information here. There have been a lot of recent changes in cloud-init with regard to OpenStack and CentOS. We've recently added a COPR repository for delivering cloud-init RPM builds from trunk.
If you are still having issues, please file a bug in Launchpad.
Follow the instructions there on what information to provide.
Feel free to ask for help on IRC, in #cloud-init on Freenode.
I am trying to build a custom Linux image using Yocto. The setup is:
Ubuntu 20.04 on Oracle Virtual Machine
Yocto release dunfell
It gives this error:
NOTE: Exit code 127. Output:
/home/user234/yocto-project/image/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/intercept_scripts-1bf6a9721f164ba99b8a07325a9c3fe0f21a700fea55c8351519b59cf02d0aca/update_desktop_database:
7: update-desktop-database: not found
ERROR: The postinstall intercept hook 'update_desktop_database' failed, details in /home/user234/yocto-project/image/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/log.do_rootfs
The problem only occurs in the virtual machine environment; it works fine in the native Linux environment on my other machine.
I have installed desktop-file-utils and I can run update-desktop-database from my shell manually. Somehow, bitbake is unable to detect it. Does anyone know the solution?
You can resolve this issue immediately by patching the poky sources to skip execution of "update_desktop_database" at this path:
poky/scripts/postinst-intercepts/update_desktop_database
Just comment out the line that invokes update-desktop-database.
The same failure may occur for other intercept scripts; do the same for each of them, and the do_rootfs task will then complete successfully.
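A minimal sketch of that workaround, assuming a poky checkout in the current directory (the path is the one quoted above). It comments out the update-desktop-database invocation in place, so back the file up first:

    # Hypothetical helper: comment out the update-desktop-database call
    # in the poky intercept script. Patches the file in place.
    from pathlib import Path

    script = Path("poky/scripts/postinst-intercepts/update_desktop_database")
    lines = script.read_text().splitlines(keepends=True)
    patched = [
        "#" + line
        if "update-desktop-database" in line and not line.lstrip().startswith("#")
        else line
        for line in lines
    ]
    script.write_text("".join(patched))
    print(f"patched {script}")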
I had a similar problem (without a virtual machine; you will see that this part is unrelated). The issue is with the file permissions of the Yocto sources, or part of them. In my particular case, our DevOps had set the POSIX file permissions on the entire Yocto source directory to 777, i.e. -rwxrwxrwx. Don't ask me why.
In the OP's case, it seems the Yocto sources may have been copied to the virtual machine via some permission-less file system, like FAT32, which results in the same outcome. A good example is copying the sources with a USB flash drive formatted as FAT32. YMMV; this may also be relevant for ACLs, but I haven't tried to fiddle with that.
In my opinion, this should be considered a bug in bitbake. If source file permissions can cause such unpredictable behavior, they should be verified before or during the build, and the user should be informed with an error.
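A minimal sketch of the kind of check I mean, assuming the sources live under a poky/ directory: it walks the tree and flags files whose permissions are 777, which is what a round trip through FAT32 tends to produce:

    # Walk the Yocto source tree and report files with suspicious
    # 777 permissions. SRC_DIR is an assumption; point it at your checkout.
    import os
    import stat

    SRC_DIR = "poky"

    for root, dirs, files in os.walk(SRC_DIR):
        for name in files:
            path = os.path.join(root, name)
            mode = stat.S_IMODE(os.lstat(path).st_mode)
            if mode == 0o777:
                print(f"{oct(mode)} {path}")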
This is clearly irrelevant to the OP by now; I hope somebody finds it useful.
A very kind volunteer helped get M2Crypto almost to the shape where it builds on Windows. We use AppVeyor CI for testing (I am a Linux guy, so I don't even have access to a Windows machine). Everything works well when it works, but it is quite unreliable. M2Crypto uses SWIG, and downloading it with choco for every job seems quite unreliable. Any ideas how to make choco more reliable?
Or, would it be possible to restart just one job (not the whole commit), so that when this happens I could get a passing commit by restarting that job?
Thank you for a great service.
I would recommend caching the Chocolatey packages.
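A minimal appveyor.yml fragment for that, assuming the default Chocolatey install location (verify the paths on your build image):

    # Cache Chocolatey's package directories between builds so a flaky
    # download doesn't fail the job. Paths are the Chocolatey defaults.
    cache:
      - C:\ProgramData\chocolatey\bin
      - C:\ProgramData\chocolatey\lib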
Restarting a single job in the matrix is still not implemented.
I am trying to deploy apps from my dev machine using Capistrano and rsync.
I have read that rsync is used for backups and only copies the bytes that have changed in a file, but I'm not sure how to fit it into a Capistrano task.
A sample deploy with rsync, with an explanation, would be greatly helpful.
Thanks
Check if the following link helps you understand the code samples:
http://philtoland.com/post/448916606/capistrano-deployment-using-rsync
Have you checked this:
https://github.com/vigetlabs/capistrano_rsync_with_remote_cache
If you soon end up using Capistrano v3 (v3.0.0pre14 as of right now), the good old rsync support gem capistrano_rsync_with_remote_cache won't work. I recently created the spiritual successor to it, called Capistrano::Rsync, which you might want to try.
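If it helps to see the moving parts, here is a minimal sketch of the strategy these gems implement: rsync into a persistent remote cache, copy the cache into a timestamped release, then flip a current symlink. It is not a Capistrano task itself, and HOST and DEPLOY_TO are placeholders:

    # Sketch of the "rsync with remote cache" deploy strategy.
    import subprocess
    import time

    HOST = "deploy@app.example.com"   # placeholder
    DEPLOY_TO = "/var/www/myapp"      # placeholder

    release = time.strftime("%Y%m%d%H%M%S")
    cache = f"{DEPLOY_TO}/shared/cached-copy"
    release_dir = f"{DEPLOY_TO}/releases/{release}"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["ssh", HOST, f"mkdir -p {cache} {DEPLOY_TO}/releases"])
    # rsync only transfers the bytes that changed since the last deploy
    run(["rsync", "-az", "--delete", "./", f"{HOST}:{cache}/"])
    # copy the cache into a new release and repoint `current`
    run(["ssh", HOST, f"cp -a {cache} {release_dir} && ln -sfn {release_dir} {DEPLOY_TO}/current"])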
Recently I had a problem on a customer's computer. Our installer would hang during install and uninstall. Eventually I found out that the winmgmt service wasn't running, and that was causing the problem; for some reason it was disabled.
I would like to add a check to our installer to guarantee that the service is running when installation begins, preferably with a helpful error message if it isn't.
I know I can do this check with a custom action, calling QueryServiceStatusEx from a C program. It can probably be done in some way in VBS too. But I would like to avoid custom actions if possible; we have had some problems with antivirus software and with the dependency on WSH.
So, in short:
How can I check if a service is running, in WiX?
(I don't have much experience with WiX. The guy who wrote the installer left the company and now I do the maintenance.)
Thanks!
There is nothing built into Windows Installer to check the status of a service, so you will need a custom action. As you've found, script custom actions should not be used; see: http://blogs.msdn.com/robmen/archive/2004/05/20/136530.aspx
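As for what the custom action itself has to do, here is a minimal sketch of the service-status check (the same QueryServiceStatus family mentioned in the question), written in Python with ctypes purely to illustrate the Win32 calls; a real WiX custom action would make the same calls from a compiled DLL:

    # Check whether a Windows service (e.g. winmgmt) is running,
    # using only the standard library.
    import ctypes
    import ctypes.wintypes as wt

    advapi32 = ctypes.windll.advapi32
    advapi32.OpenSCManagerW.restype = wt.SC_HANDLE
    advapi32.OpenServiceW.restype = wt.SC_HANDLE

    SC_MANAGER_CONNECT = 0x0001
    SERVICE_QUERY_STATUS = 0x0004
    SERVICE_RUNNING = 0x00000004

    class SERVICE_STATUS(ctypes.Structure):
        _fields_ = [("dwServiceType", wt.DWORD),
                    ("dwCurrentState", wt.DWORD),
                    ("dwControlsAccepted", wt.DWORD),
                    ("dwWin32ExitCode", wt.DWORD),
                    ("dwServiceSpecificExitCode", wt.DWORD),
                    ("dwCheckPoint", wt.DWORD),
                    ("dwWaitHint", wt.DWORD)]

    def service_is_running(name):
        scm = advapi32.OpenSCManagerW(None, None, SC_MANAGER_CONNECT)
        if not scm:
            raise ctypes.WinError()
        try:
            svc = advapi32.OpenServiceW(scm, name, SERVICE_QUERY_STATUS)
            if not svc:
                raise ctypes.WinError()
            try:
                status = SERVICE_STATUS()
                if not advapi32.QueryServiceStatus(svc, ctypes.byref(status)):
                    raise ctypes.WinError()
                return status.dwCurrentState == SERVICE_RUNNING
            finally:
                advapi32.CloseServiceHandle(svc)
        finally:
            advapi32.CloseServiceHandle(scm)

    print("winmgmt running:", service_is_running("winmgmt"))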
Does anyone have suggestions for methods of deploying Perl modules to a shared-nothing cluster?
Our current method is very manual.
1. Take down half the cluster.
2. Copy the Perl modules (CPAN-style modules) to the downed cluster members.
3. ssh to each member and run perl Makefile.PL; make; make install for each module to be installed.
4. Confirm the deployment.
5. Bring the newly deployed cluster members into service, take the old cluster members out of service, and repeat steps 2-4.
This is obviously far from optimal. Does anyone have, or know of, a good tool chain for deploying Perl modules to a shared-nothing cluster?
Take one node offline, install Perl, and then use it to reimage the other nodes.
At least, that's how I imagine you'd want to install software in a shared-nothing cluster. Perl is just the application you happen to be installing.
Assuming all the machines are identical, you should be able to keep one canonical installation and use rsync or something similar to keep the others updated.
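A minimal sketch of that rsync step, assuming identical nodes and a canonical Perl tree under /usr/local/perl (host names and paths are placeholders):

    # Push the canonical Perl installation from this node to the others.
    import subprocess

    CANONICAL_TREE = "/usr/local/perl/"   # trailing slash: sync contents
    NODES = ["node2", "node3", "node4"]   # placeholders

    for node in NODES:
        subprocess.run(
            ["rsync", "-az", "--delete", CANONICAL_TREE, f"{node}:{CANONICAL_TREE}"],
            check=True,
        )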
I have, in the past, developed a Perl program which used the Expect module (from CPAN) to automate basically the process you described, automatically sshing to each host, copying any necessary files, and performing the installations. Unfortunately, this was developed on-site for a client, so I do not have access to the code to share. If you're familiar with Expect, it shouldn't be too difficult to set up, though.
We currently have a clustered Perl application that does data processing. We also have numerous CPAN modules and modules that we've developed that the software depends on. When you say 'shared nothing', I'm assuming you're referring to things like NFS mounts.
If the machines have identical configurations, then you may be able to build your entire application into a single directory structure (e.g. /opt/my-app), tar it up, and that could become the only thing you need to push to the boxes.
As far as deploying it to the boxes, you might be able to use Capistrano. We developed a couple of our own cluster utilities that piggyback off of ssh; I've released one form of that utility: parallel-jobs. Its README shows an example of executing multiple parallel ssh commands. It's a small step to extend that program to know about your cluster and execute the same command across it (as opposed to a series of different commands).
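In that spirit, a minimal sketch of running the same command on every cluster member in parallel (host names and the command are placeholders):

    # Run one command on all hosts concurrently and report each exit code.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["node1", "node2", "node3", "node4"]      # placeholders
    CMD = "tar -C /opt -xzf /tmp/my-app.tar.gz"       # placeholder

    def run_on(host):
        r = subprocess.run(["ssh", host, CMD], capture_output=True, text=True)
        return host, r.returncode, (r.stdout + r.stderr).strip()

    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        for host, code, out in pool.map(run_on, HOSTS):
            print(f"{host}: exit {code}")
            if out:
                print(out)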
If you are using Debian or Ubuntu, you could package your Perl modules. I have open-sourced some code to help with this: Perl module builder. It's still very rough but does work, and it can be made to work on your own code as well as CPAN modules; this makes deployment much easier.
There is also a project to get Red Hat RPMs for all of CPAN; Dave Cross gave a talk, Perl in RPM-Land, which may be of use.
If you are on some other system which doesn't have packaging, then the rsync option (install on one machine and then rsync to the others) should work as well; note that you can mount a Windows share and rsync to it from Unix if needed.
Using a central manager like Puppet makes creating and maintaining machines in a cluster a lot easier, from installing code to managing users and email configuration. There is also a Perl project in the pipeline to do something similar, but it has not been made public yet.
Capistrano is a tool that allows you to run commands on a group of servers; it is perfectly suited to making your task considerably easier.
Further down the line of automation, but also of complexity, is Puppet, which allows you to define a group of servers, give them roles, and then push out sets of code to every machine subscribing to a certain role.
I am not sure exactly what a shared-nothing cluster is, but if it uses a base *nix system like Fedora, Mandriva, or Ubuntu, many of the Perl modules come precompiled for specific architectures, and you can easily run those.
If the systems are of the same architecture, you can do as someone else said and just copy the compiled modules from system to system; just make sure you have all of the dependencies on the recipient system as well.