I've been trying, without success, to set the volume in VLC [2.2.1] via terminal, on Ubuntu.
The parameter --volume doesn't exist anymore (Warning: option --volume no longer exists), and I can't find anything in the help that mentions "volume".
The documentation (https://wiki.videolan.org/Documentation:Advanced_Use_of_VLC/) is outdated, as it still has the --volume option in it.
Is it still possible?
According to the documentation,
--volume no longer exists but --volume-step and --gain may be used:
--gain <float> audio gain (between 0 and 8)
--volume-step <float> audio output volume step (between 1 and 256)
Note that gain is independent of volume: if you increase it, the sound will be louder even though the volume setting will not change.
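For example, to start playback louder without touching the volume setting, something like this should work (a sketch; the file name is illustrative):
vlc --gain 2.0 some-file.mp3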
Tested on macOS using VLC 3:
/Applications/VLC.app/Contents/MacOS/VLC --auhal-volume=256
This will start VLC with a volume of 256, which corresponds to 100% of the current system volume.
The value can be set as high as 512.
The original question was about Ubuntu, but I thought I'd mention this in case someone using Windows comes across it. This is the only setting that seems to work for me on Windows.
--mmdevice-volume=<float [0.000000 .. 1.250000]>
e.g.
--mmdevice-volume=0.5
Found via the exhaustive help list, also mentioned here
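If you want to hunt for volume-related switches yourself, the exhaustive help can be filtered; on Ubuntu, a sketch:
vlc -H 2>/dev/null | grep -i volume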
I'm new to Ceph and am setting up a small cluster. I've set up five nodes and can see the available drives, but I'm unsure exactly how I can add an OSD and specify the locations for WAL+DB.
Maybe my Google-fu is weak, but the only guides I can find refer to ceph-deploy, which, as far as I can see, is deprecated. Guides that mention cephadm only cover adding a drive, not specifying the WAL+DB locations.
I want to add HDDs as OSDs and put the WAL and DB onto separate LVs on an SSD. How?!
It seems that for the more advanced cases, like using a dedicated WAL and/or DB, you have to use the concept of drivegroups.
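As a sketch of what a drivegroup spec applied through cephadm might look like (the service id, host pattern, and rotational-based device selection are illustrative assumptions, and the exact spec layout varies by release):
cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: hdd_with_ssd_db
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
EOF
ceph orch apply osd -i osd_spec.yml
This tells the orchestrator to use rotational drives as data devices and non-rotational ones for the DB.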
If your Ceph version is Octopus (in which ceph-deploy is deprecated), I suppose you could try this:
sudo ceph-volume lvm create --bluestore --data /dev/data-device --block.db /dev/db-device
I built Ceph from source, but I think this method should be supported, and you can run
ceph-volume lvm create --help
to see more parameters.
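For the setup in the question (WAL and DB on separate LVs of an SSD), a hedged sketch might look like this; /dev/sdb (the SSD), /dev/sdc (an HDD), and the VG/LV names are illustrative assumptions:
# carve DB and WAL LVs out of the SSD
vgcreate ceph-fast /dev/sdb
lvcreate -L 30G -n osd0-db ceph-fast
lvcreate -L 2G -n osd0-wal ceph-fast
# create the OSD on the HDD, pointing DB and WAL at the SSD LVs
sudo ceph-volume lvm create --bluestore --data /dev/sdc \
    --block.db ceph-fast/osd0-db --block.wal ceph-fast/osd0-wal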
I am having a problem with a VMware VM that has CentOS 7 installed in it.
The lsblk command gives something like below.
df -h gives this.
I am trying to extend the root LV onto the remaining space, but I have not been able to do it no matter what I tried.
I tried fdisk /dev/sda to create a new partition and extend the LV onto it, but fdisk gets stuck after asking for the partition number.
Some other useful commands give these results just in case they are helpful.
Any help would be appreciated. Thanks in advance.
From your screenshot, your sda2 partition is 199G and sda1 takes 1G, so the full 200G of sda is already allocated; you cannot make a new partition on sda, and that's why fdisk gets stuck there.
To resolve your issue, here are two options for reference. Before doing anything, please make sure you back up all your important data and VM files.
Option 1 (this one is just my thoughts, unverified):
From your vgs and pvs output, you can see that sda2 contributes only <29G to the whole VG (centos), so a very simple way of extending your root LV is:
1) pvresize /dev/sda2
After it executes, run pvs to check whether the PV size increased; if not, stop here.
2) vgextend centos /dev/sda2
After it executes, check vgs to see whether the VG size increased; if so, go on to the next step.
3) lvextend -l +100%FREE /dev/mapper/centos-root
After this, check lvs. Even once the LV is larger, the filesystem still has to be grown:
4) Try:
xfs_growfs /dev/mapper/centos-root
if the root filesystem is XFS (the CentOS 7 default), or
resize2fs /dev/mapper/centos-root
if it is ext4.
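If you're unsure which filesystem the root LV uses, you can check first (a quick sketch):
df -T /
lsblk -f /dev/mapper/centos-root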
Option 2: the best practice is to use pvmove; this is strongly recommended for production environments. You can learn from:
https://askubuntu.com/questions/161279/how-do-i-move-my-lvm-250-gb-root-partition-to-a-new-120gb-hard-disk
I executed the pvresize /dev/sda2 command, and after it I got pvs output like below.
After this, I tried to execute vgextend centos /dev/sda2, but I got this error.
However, the vgs and vgdisplay centos commands now give something different than before, like below.
Using Kubernetes' kubectl, I can execute arbitrary commands on any pod, such as kubectl exec pod-id-here -c container-id -- malicious_command --steal=creditcards
Should that ever happen, I would need to be able to pull up a log saying who executed the command and what command they executed. This includes if they decided to run something else by simply running /bin/bash and then stealing data through the tty.
How would I see which authenticated user executed the command as well as the command they executed?
Audit logging is not currently offered, but the Kubernetes community is working to get it available in the 1.4 release, which should come around the end of September.
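For readers on newer clusters: once audit logging did land, recording exec calls takes an audit policy plus two API server flags. A minimal sketch (file paths are illustrative):
cat > /etc/kubernetes/audit-policy.yaml <<'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods/exec"]
EOF
# then start kube-apiserver with:
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit.log
The resulting log entries include the authenticated user and the request URI, which carries the exec'd command.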
There are 3rd-party solutions that can solve the auditing issue, and if you're looking for PCI compliance, as the title implies, solutions exist that help solve the broader problem, not just auditing.
Here is a link to such a solution by Twistlock. https://info.twistlock.com/guide-to-pci-compliance-for-containers
Disclaimer, I work for Twistlock.
I use Grafana with CollectD (and Graphite) to monitor my network usage on my server.
I use the 'Interface' Plugin of CollectD and display the graphs like this:
alias(scale(nonNegativeDerivative(collectd.graph_host.interface-eth0.if_octets.rx), 0.00000095367431640625), 'download')
When I initiate a download with a speed limit, the download runs for approximately 10 minutes, but only this is shown (the green line is the download). So it only shows a peak.
Do I have to use some other metrics? I also tried the 'ethstat' plugin, but that has so many options, none of which I understand!
Is there any beginner documentation? I only found the CollectD docs, which I read, but they do not say anything about what the ethstat metrics actually mean.
No, there isn't any beginner documentation about the meaning of the ethstat metrics in collectd. This is because the ethstat plugin reports statistics collected by ethtool on your system, and the ethtool stats are vendor specific.
To point you in the right direction, run ethtool -S eth0
That should show you names and numbers like what collectd is reporting.
Now run ethtool -i eth0 and find your driver info.
Then, google your driver name and find out what statistics your card reports and what they mean. It may involve reading Linux driver source code, but don't be too scared of that. What you want is probably in the comments, not the code.
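Putting the two steps together, a quick sketch (eth0 is illustrative):
ethtool -i eth0 | awk '/^driver:/ {print $2}'   # the driver name to google, e.g. "e1000e"
ethtool -S eth0 | head -n 20                    # the first vendor-specific counters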
I am working on an embedded linux device that has an internal SD card. This device needs to be updatable without opening the device and taking out the SD card. The goal is to allow users to update their device with a USB flash drive. I would like to completely overwrite the internal SD card with a new SD card image.
My first thought was to unmount the root filesystem and use something to the effect of:
dd if=/mnt/flashdrive/update.img of=/dev/sdcard
However, it appears difficult to actually unmount a root filesystem correctly, as processes like "login" and "systemd" are still using resources on root. As soon as you kill login, for example, the update script is killed as well.
Of course, we could always use dd without unmounting root. This seems rather foolish, however. :P
I was also thinking of modifying the system init script to perform this logic before the system actually mounts the root filesystem.
Is there a correct/easy way to perform this type of update? I would imagine it has been done before.
Thank you!
Re-imaging a mounted file system doesn't sound like a good idea, even if the mount is read-only.
Consider:
Use a ramdisk (initialized from a compressed image) as your actual root filesystem, but perhaps keep all but the most essential tools in filesystems mounted beneath it, which you can drop to upgrade. Most Linux implementations do this early in their boot process anyway, before they mount the main disk filesystems; rebooting to do the upgrade may be an option.
SD cards are likely larger than you need anyway. Have two partitions and alternate between them each time you upgrade (see the sketch after this list), or have a maintenance partition that you boot into to perform upgrades/recovery.
Don't actually image the file system, but instead upgrade individual files.
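A rough sketch of the two-partition (A/B) idea, assuming root lives on /dev/mmcblk0p2 or p3 and a U-Boot-style bootloader picks the active slot (all names here are illustrative):
ACTIVE=$(findmnt -n -o SOURCE /)                 # e.g. /dev/mmcblk0p2
if [ "$ACTIVE" = /dev/mmcblk0p2 ]; then TARGET=/dev/mmcblk0p3; else TARGET=/dev/mmcblk0p2; fi
dd if=/mnt/flashdrive/update.img of="$TARGET" bs=4M conv=fsync
# finally point the bootloader at the freshly written slot (e.g. fw_setenv on U-Boot) and reboot
The running root is never overwritten, so a failed update leaves the device bootable.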
Try one or both of the following before running the dd:
Bring the system down to single-user mode first: telinit 1
Remount / as read-only: mount -o remount,ro /
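Put together, a sketch (/dev/sdcard is the placeholder device from the question):
telinit 1                              # drop to single-user mode
mount -o remount,ro /                  # stop writes to the root filesystem
dd if=/mnt/flashdrive/update.img of=/dev/sdcard bs=4M conv=fsync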
Personally, I would never do it the way you describe, but it is possible.
Your Linux system does it every time it boots. In fact, what happens is that your kernel initially mounts the initrd, loads all the modules, and after that calls pivot_root to switch to the real /.
pivot_root is also a command that can be used from a shell. You should read man 8 pivot_root, but just to give you an idea, you can do something like this:
mount /dev/hda1 /new-root                          # mount the new root filesystem
cd /new-root
pivot_root . old-root                              # . becomes /, the old root appears at /old-root
exec chroot . sh <dev/console >dev/console 2>&1    # re-exec a shell inside the new root
umount /old-root
One last thing: this way of performing software upgrades is extremely fragile. Please consider other solutions.