I'm trying to use the cl-heap library, but when I run
(quicklisp:quickload 'cl-heap)
it returns:
The archive file "cl-heap-0.1.6.tgz" for "cl-heap" is the wrong size: expected 26,979, got 12,288
What can I do to be able to run cl-heap?
I am quite sure this means that your downloaded file is broken. Maybe the download was interrupted, or your disk is full.
Call ql:uninstall on the system first, make sure you have enough disk space and a working network connection, and then ql:quickload it again.
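At the REPL that looks roughly like this (a sketch, assuming the usual ql: nickname for the quicklisp package):

(ql:uninstall "cl-heap")   ; throw away the broken installation
(ql:quickload "cl-heap")   ; download and load it again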
We're using GCS for our archive backups and I was curious what people think is better for the initial upload: rsync or cp?
I've gotten hung up twice (once on a non-Unicode character and again on what seemed like an overly long path) and would like to be able to pick up where I left off.
Any advice would be appreciated!
(And if this is a bad question, can someone tell me exactly why it's bad or how to fix it? It seems I suck at asking questions here!)
rsync is better suited for archives/backups, for the reason you hinted at: if you start uploading data and run into a problem partway through, restarting a cp makes you re-upload files that were already transferred successfully, while rsync only uploads the files that weren't uploaded yet (or that changed since the last run). Moreover, if some of the source files were deleted since you last started uploading, rsync (with its -d option) will remove them from the destination bucket, making the destination content match the source content.
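For GCS specifically, a minimal gsutil sketch (the local path and bucket name are placeholders; -d deletes destination objects missing from the source, -r recurses, and -m parallelizes the transfer):

gsutil -m rsync -d -r /path/to/archive gs://my-backup-bucket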
I have two drives, A and B. Using a Python script I am creating some files on drive A, and I am running a PowerShell script which copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in my PowerShell window:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination File \\x.x.x.x\share1\source\Dummy_100.txt
The process cannot access the file because it is being used by another process. Waiting 30 seconds...
How can I overcome this error?
This happens because the file is locked by a running process. To fix it, download Process Explorer, then use Find > Find Handle or DLL to see which process has the file locked. Use 'taskkill' to kill that process from the command line. You will be fine.
If you want robocopy to give up on such files, you can use /r:n, where n is the number of retries.
For example, /w:3 /r:5 will retry 5 times, waiting 3 seconds between attempts.
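A full invocation along those lines (the source and destination paths are placeholders):

robocopy A:\source B:\dest /r:5 /w:3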
If backup is what you have in mind and you frequently encounter in-use files, look into the Volume Shadow Copy Service (VSS), which allows files to be copied even while they are 'in use'. It's not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but it can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
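A shadowspawn invocation has roughly this shape (a sketch; the paths, drive letter, and robocopy options are made up, so check the project's README for the exact syntax). It snapshots the source, mounts the snapshot under a temporary drive letter, runs the copy against that, and cleans up afterwards:

shadowspawn C:\data Q: robocopy Q:\ D:\backup /mir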
There could be many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I had the same problem because it was trying to write into a log file that was supposedly opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ideas about what was hogging the log file. Event Viewer said nothing.
I found out it was not even the log file that was being locked: I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves, held by the SQL Server Agent. Like @Oseack said, another tool may have been needed while the backup files were still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait.
/W:5
did it.
I finally managed to automate our release process using Desired State Configuration with the Azure PowerShell SDK methods, in particular the Publish-AzureVMDscConfiguration -> Set-AzureVMDscExtension -> Update-AzureVM combo.
After thinking for a while about a way to send my build outputs somewhere accessible to the VM, I ended up with the strategy of appending my build drops to the configuration package that gets uploaded to Azure Storage.
My problem now is that as soon as the PowerShell DSC extension in the VM starts downloading that package, its memory consumption goes through the roof. When I open Task Manager, I can see the newly created PowerShell process going from 30 or so megabytes, to 300, and then to 1.3 GB, completely ruining my VM.
Yesterday afternoon I left work and let it keep processing, but when I logged into the VM today, the inner zip file containing my build outputs had 0 bytes in the DSCWork folder. My problem is that even if it works in the end, it takes a very long time and makes my VM useless... I can't even switch between windows over remote access, since the machine is completely stuck at 100% RAM usage.
Why is PowerShell taking so much memory and time to download my configuration package? It only has 60MB zipped, and roughly 200MB unzipped. Is there something I can do to prevent that from happening?
UPDATE:
I tested it just now and it finally finished correctly. Took more than a full hour, but the files are there... This is unacceptable though.
This issue should be resolved in the next iteration of the extension. In the meantime, you may want to consider uploading your build content to a blob separate from your configuration ZIP package (you can use Set-AzureStorageBlobContent for this).
Then you can use either the remote file or script resources in your original configuration to download the blob. Be sure to add the appropriate dependencies in your configuration so that the blob gets downloaded before you use it.
configuration DownloadSample
{
    Import-DscResource -Module xPSDesiredStateConfiguration

    xRemoteFile Download
    {
        Uri             = 'https://....blob.core.windows.net/windows-powershell-dsc/foo.zip?sv=...'
        DestinationPath = 'd:\tmp\download.zip'
    }
}
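Uploading the build drop as its own blob could look roughly like this (a sketch using the classic Azure PowerShell cmdlets; the account name, key variable, container, and file names are placeholders):

# build a storage context from the account name and key (both placeholders)
$ctx = New-AzureStorageContext -StorageAccountName 'mystorageacct' -StorageAccountKey $storageKey
# upload the build drop; the xRemoteFile Uri above would then point at this blob
Set-AzureStorageBlobContent -File 'C:\drops\builddrop.zip' -Container 'windows-powershell-dsc' -Blob 'builddrop.zip' -Context $ctx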
I am learning to write character device drivers from the Kernel Module Programming Guide, and used mknod to create a node in /dev to talk to my driver.
However, I cannot find any obvious way to remove it, after checking the manpage and observing that rmnod is a non-existent command.
What is the correct way to reverse the effect of mknod, and safely remove the node created in /dev?
The correct command is just rm :)
A device node created by mknod is just a file that contains a device major and minor number. When you access that file the first time, Linux looks for a driver that advertises that major/minor and loads it. Your driver then handles all I/O with that file.
When you delete a device node, the usual Un*x file behavior applies: Linux will wait until there are no more references to the file, and then it will be deleted from disk.
Your driver doesn't really notice any of this. Linux does not automatically unload modules; your driver will simply no longer receive requests to do anything. But it will be ready in case anybody recreates the device node.
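For example (the device name and major/minor numbers here are made up):

sudo mknod /dev/mydriver c 240 0   # create a character device node
sudo rm /dev/mydriver              # and remove it again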
You are probably looking for a function rather than a command. unlink() is the answer. unlink() will remove the file/special file if no process has the file open. If any processes have the file open, then the file will remain until the last file descriptor referring to it is closed. Read more here: http://man7.org/linux/man-pages/man2/unlink.2.html
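If you want to do it from code, a minimal C sketch (the device name is again just an example):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* remove the device node created earlier with mknod */
    if (unlink("/dev/mydriver") == -1) {
        perror("unlink");
        return 1;
    }
    return 0;
}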
I came across this error, which is apparently pretty common on Linux systems:
"Too many open files"
In my code I tried to set the Python open file limit to unlimited and it threw an error saying that I could not exceed the system limit.
import resource

try:
    # ask for a soft limit of 500 and an unlimited hard limit
    # (-1 means resource.RLIM_INFINITY); the kernel refuses to raise the
    # hard limit above its own maximum, which is the error reported here
    resource.setrlimit(resource.RLIMIT_NOFILE, (500, -1))
except Exception as err:
    print(err)
So...I Googled around a bit and followed this tutorial.
However, I set everything to 9999999, which I thought would be as close to unlimited as I could get. Now I cannot open a session as root on that machine. I can't log in as root at all and am pretty much stuck. What can I do to get this machine working again? I need to be able to log in as root! I am running CentOS 6 and it's as up to date as possible.
Did you try turning it off and on?
If that doesn't help, you can supply init=/bin/bash as a kernel boot parameter to get a root shell, or boot from a live CD, and then revert your changes.
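A rough sketch of the init=/bin/bash route (assuming the limits were raised in /etc/security/limits.conf, which is where such tutorials usually put them):

# append init=/bin/bash to the kernel line at the boot loader, then in the shell:
mount -o remount,rw /            # the root filesystem comes up read-only
vi /etc/security/limits.conf     # revert the nofile entries (assumed location of the change)
vi /etc/sysctl.conf              # ...and any fs.file-max change, if the tutorial touched it
mount -o remount,ro /            # flush everything back to disk
reboot -f                        # force the reboot, since no init is running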
After running 'strace su -', I looked for 'No such file or directory' errors. Comparing the output, I found that some of those errors are OK; however, other files that existed on a comparison system were missing on my problem system. Ultimately, it led me to a faulty line in /etc/pam.d/system-auth-ac referencing an invalid shared object.
So, my recommendation is to go through your /etc/pam.d config files and validate the existence of the shared object libraries, or look in /var/log/secure, which should give some clue about missing shared objects as well.
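A quick way to sanity-check that is a short shell loop (a sketch; /lib64/security is where 64-bit CentOS 6 keeps its PAM modules, so adjust the path on 32-bit systems):

# list every pam_*.so referenced by the PAM configs and flag the ones that don't exist
grep -hoE 'pam_[A-Za-z0-9_]+\.so' /etc/pam.d/* | sort -u | while read mod; do
    [ -e "/lib64/security/$mod" ] || echo "missing: $mod"
done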