I have a PC that is usually used for Yocto image building. Now I need to add ROS2 packages to the same image. It turned out the disk was full, so I connected an external SSD to build the image on. I did the same steps as before, ran the same command, etc., but after the build starts it crashes at the first package. I've reinstalled all the sources from scratch and deleted tmp and sstate-cache, but nothing helps. I don't understand what this error says.
This is the error trace log.
As far as I can see, Yocto fails to write something into sstate-cache/61, but I don't really know what that is. The user has read/write permissions.
The build system: Ubuntu 20.04
Yocto version: zeus
In the linked error log, the relevant part is:
SignatureGeneratorOEBasicHash.dump_sigtask(fn='/media/sw/Samsung/yocto/sources/poky/meta/recipes-extended/texinfo-dummy-native/texinfo-dummy-native.bb', task='do_fetch', stampbase='/media/sw/Samsung/yocto/build-xwayland/sstate-cache/61/sstate:texinfo-dummy-native::1.0:r0::3:610ed4b8e8bf78bbcd4a667b6645a0276f5c8bfce5de4822923850d44d032bbe_fetch.tgz.siginfo', runtime='customfile:/media/sw/Samsung/yocto/build-xwayland/tmp/stamps/x86_64-linux/texinfo-dummy-native/1.0-r0'):
os.chmod(tmpfile, 0o664)
> os.rename(tmpfile, sigfile)
except (OSError, IOError) as err:
OSError: [Errno 22] Invalid argument: '/media/sw/Samsung/yocto/build-xwayland/sstate-cache/61/sigtask.twkjztl9' -> '/media/sw/Samsung/yocto/build-xwayland/sstate-cache/61/sstate:texinfo-dummy-native::1.0:r0::3:610ed4b8e8bf78bbcd4a667b6645a0276f5c8bfce5de4822923850d44d032bbe_fetch.tgz.siginfo'
It is likely that the new name is not valid on the target disk's filesystem: the : character is typically invalid on FAT/NTFS filesystems. Native Linux filesystems like ext4, XFS, and Btrfs do not have this limitation.
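To confirm this before reformatting, you can check which filesystem the external drive uses; a minimal sketch, assuming the drive is mounted at /media/sw/Samsung as in the log:
# Show the filesystem type backing the sstate-cache directory
df -T /media/sw/Samsung
# If it reports vfat, exfat, or ntfs, reformat to ext4
# (WARNING: this erases the disk; /dev/sdX1 is a placeholder for your partition)
# sudo mkfs.ext4 /dev/sdX1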
I have Ubuntu 18.04.3 running on VirtualBox.
I have been trying to install Pintos on QEMU, but when I run ./pintos run alarm-multiple, QEMU is stuck on loading.
I am getting the following output:
WARNING: Image format was not specified for '/tmp/5XpQ2ee16J.dsk' and probing guessed raw.
Automatically detecting the format is dangerous for raw images, write
operations on block 0 will be restricted. Specify the 'raw' format
explicitly to remove the restrictions.
qemu-system-x86_64: warning: TCG doesn't support requested feature:
CPUID.01H:ECX.vmx [bit 5]
PiLo hda1
Loading............
Kernel command line: run alarm-multiple
Pintos booting with
Nothing comes after "Pintos booting with".
You are using an old version of Pintos.
Use the latest version, available here:
git://pintos-os.org/pintos-anon
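For example, to fetch the current sources (the target directory name is arbitrary):
git clone git://pintos-os.org/pintos-anon pintos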
I am aware this question is very specific. Nonetheless, maybe someone can help:
I was trying to compile an open-source project today (for anyone who's interested, that's the one). The error message described below occurs after running oai_hss -j $PREFIX/hss_rel14.json --onlyloadkey, having followed the step-by-step installation guide to this point.
After typing the aforementioned command in my terminal, the following error is thrown:
terminate called after throwing an instance of 'spdlog::spdlog_ex'
what(): Failed opening file logs/hss.log for writing: No such file or directory
Aborted (core dumped)
Alright, this sounds pretty severe (core dumped). I searched Google for the meaning of that error message and came across this other GitHub project. Apparently the spdlog class tries to enable logging from wherever I run my program, and it throws an spdlog_ex error whenever the file it is trying to add to the registry (in this case logs/hss.log) already exists within that registry. So, I guess, the solution to my problem would be to find this registry and delete logs/hss.log. Does this make sense?
Question: Where the heck do I find this registry?
Maybe some background knowledge would be useful: I am trying to compile the open-source code within a VM that is running Ubuntu 18.04.3 LTS bionic with a 4.15.0-66-generic kernel.
I already searched the /tmp directory for a logs folder. There is none. Where else could it be?
Open this file:
sudo nano /usr/local/etc/oai/hss_rel14.json
You will see some config where you can find logs/hss.log.
You actually have to change these 4 values to:
logname: "/var/log/hss.log"
statlogname: "/var/log/hss_stat.log"
auditlogname: "/var/log/hss_audit.log"
ossfile: "~/openair-cn/etc/oss.json"
Then use sudo touch to create these files:
sudo touch /var/log/hss.log
sudo touch /var/log/hss_stat.log
sudo touch /var/log/hss_audit.log
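If oai_hss runs as a non-root user, the files created with sudo touch will be owned by root and may not be writable by it. In that case, something like this should help (an assumption; adjust the user to your setup):
# Give the invoking user ownership of the new log files
sudo chown $USER /var/log/hss.log /var/log/hss_stat.log /var/log/hss_audit.log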
For logname, statlogname, and auditlogname you can use whatever files you want, but I like to put them together in the /var/log folder.
For ossfile, the oss.json file is actually already there.
Hope this helps.
I have once before mounted this same database, so I am confident that I have the correct credentials.
During the last session in which I had it mounted, I was experimenting with my queries, visuals, etc., and the session suddenly crashed.
Then when I reloaded SlamData, the mount for my database was gone.
Obviously I then tried to remount the same database with the same credentials in order to continue my work. However, when I did this, I got an error:
There was a problem saving the mount: An unknown error ocurred: 500 ""
And then there is a never-ending spinning wheel sitting on the mount button. I can leave this pop-up and go to the original screen, but nothing happens. And then if I try to remount again, the same error occurs.
I have verified that I can still access my db and collections using Robomongo. So if anyone knows what this error message refers to, please let me know! I have yet to find its meaning online.
Note: I have already tried uninstalling and reinstalling, and restarting my computer.
This bug has been identified in SlamData 4.2.1: an issue with the MongoDB connector that would corrupt the metastore if you used the _id field in a query. The fix will be available in the upcoming SlamData 4.2.2 release.
Below is the fix:
Delete the current metastore. Below is the location of this file for each supported operating system:
Mac OS:
$HOME/Library/Application Support/quasar/quasar-metastore.db.mv.db
Microsoft Windows:
%HOMEDIR%\AppData\Local\quasar\quasar-metastore.db.mv.db
Linux (various vendors):
$HOME/.config/quasar/quasar-metastore.db.mv.db
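For example, on Linux the deletion would look like this (a sketch assuming the default path listed above; adapt it for Mac OS or Windows):
rm "$HOME/.config/quasar/quasar-metastore.db.mv.db"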
Open a terminal and switch to the location where you installed SlamData. You should find a quasar-web.jar file in the following location, based on your operating system's default installation paths:
Mac OS:
/Applications/SlamData 4.2.1.app/Contents/java/app/quasar-web.jar
Microsoft Windows:
C:\Program Files (x86)\slamdata 4.2.1\quasar-web.jar
Linux (various vendors):
$HOME/SlamData 4.2.1/quasar-web.jar
Run the following command in a terminal:
java -jar quasar-web.jar initUpdateMetaStore
This will rebuild your metastore. Once complete it will return you to your operating system prompt.
Rerun the SlamData application as you normally would
Remount your database
At this point you can access your saved workspaces.
NOTE: Do not open the workspace you were using when this issue occurred, as it will trigger the same problem.
I compiled an lm75 driver as a module to insert at run time, and when I tried to perform the below
#insmod ./lm75.ko
I got this output:
Error: Driver 'lm75' is already registered aborting...
insmod: can't insert './lm75.ko': Device or resource busy
So I tried removing it from the kernel as below:
#rmmod lm75.ko
which output:
rmmod: can't unload module 'lm75': No such file or directory
Let me know if I'm missing something.
I'm using a script to run commands in U-Boot, which in turn loads images (uImage, rootfs, dtb) from predefined locations on the MMC, but the recent uImage was in the wrong location (my fault). Hence the uImage and rootfs loaded were mismatched: the uImage already had the lm75 driver (it was an old image in which lm75 was compiled as a built-in driver), while the rootfs had no info about lm75 (it was the latest one, in which lm75 is compiled as a kernel module). With the correct images in place, insmod and rmmod worked as expected. Hope this helps people like me :)
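If you hit a similar mismatch, here is a quick sketch for checking whether the running kernel already has the driver built in (using the lm75 name from this case):
# List built-in drivers matching lm75 in the running kernel
grep lm75 /lib/modules/$(uname -r)/modules.builtin
# Check whether it is currently loaded as a module instead
lsmod | grep lm75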
My source is an old external hard drive formatted as HFS+ (it used to hold data from a server running 10.4.11), connected to an iMac running 10.8.5 with an upgraded rsync 3.0.9.
The destination is a Centos 6.4 server running rsync 3.0.9 as well.
We have tried to transfer a FONTS folder (source size = 4.7GB) to the destination, but the size of the folder is not preserved (destination size = 655MB).
Below is the command that I run to preserve hard links, ACLs, etc.:
/usr/local/bin/rsync -aHpEXotg --devices --specials --ignore-errors /Users/london/Desktop/FONTS root@192.168.200.253:/home/TEST
Also getting errors: rsync: rsync_xal_set: lsetxattr("/home/TEST/FONTS/ Folder/Kfz-EURO Schrift MAC+PC/MAC/FE Mittelschrift.image","user.com.apple.FinderInfo") failed: Operation not supported (95)
Most of the files show up as Unix files and can't be opened.
This issue has been time-consuming, so I'd be grateful if someone could guide me.
Thanks.
Ran across this today as I encountered similar errors. I ended up running rsync with minimal options to complete the copy:
rsync -r --progress /path/to/source /path/to/destination
-r is recursive
--progress shows additional copy info (versus -v for verbose output)
If you leave out --progress, rsync will only show you the files that error and will transfer the rest; that can be useful for knowing which files you're not getting, if there aren't very many errors. Of course, if there are a lot of errors, that can indicate bad sectors on the drive.
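As a sketch, you can also save the per-file error messages for later review while the rest of the copy proceeds (the paths are placeholders):
# stderr (the error lines) goes to errors.log; normal progress stays on screen
rsync -r --progress /path/to/source /path/to/destination 2>errors.log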