I need to scan multiple Linux machines running CentOS and collect system and monitor details: make, model and serial number. The machines have Nvidia GPUs installed. Commands such as dmidecode can pull a lot of hardware info (model, serial number, processor details) but cannot get me the monitor details.
Any directions to help get the monitor details would be highly appreciated. A Google search turns up plenty of info on how to get this on Ubuntu, but those methods are not working on CentOS.
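For context, what I am effectively after is the monitors' EDID data, which carries the make, model and serial number. This is a sketch of the kind of approach the Ubuntu guides suggest (it assumes the driver exposes EDID through sysfs, which the Nvidia binary driver does not always do, and that edid-decode is installed):

# dump and decode the EDID of every connected output
for edid in /sys/class/drm/card*/edid; do
    [ -s "$edid" ] && edid-decode "$edid"
done
# under X, xrandr can dump the same EDID bytes
xrandr --verbose | grep -A8 EDID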
I ran an experiment with a Python app that writes 2,000 records into MongoDB.
The details of my experiment setup are as follows:
Test 1: Local PC - Python app running on the local PC with MongoDB on the local PC (baseline)
Test 2: Docker - Python app in a Linux container with MongoDB in a Linux container with a persistent volume
Test 3: Docker - Python app in a Linux container with MongoDB in a Linux container without a persistent volume (a sketch of the container commands follows this list)
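For reference, the MongoDB containers were started roughly like this (a sketch, not my exact commands; image tag, volume name and ports are illustrative):

# Test 2: MongoDB with a persistent named volume
docker volume create mongo_data
docker run -d --name mongo-persist -v mongo_data:/data/db -p 27017:27017 mongo

# Test 3: MongoDB without a persistent volume (data stays in the container layer)
docker run -d --name mongo-nopersist -p 27018:27017 mongo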
I’ve charted the results: on average, writing the data on the local PC takes about 30 seconds, whereas on Docker it takes 80-plus seconds. So writing on Docker appears to be almost three times slower than writing on the local PC itself.
If I want to improve the write speed or performance of MongoDB in a Docker container, what is the recommended practice? Or should I run MongoDB outside Docker on an external volume?
Thank you!
[chart: average write time per test]
Your system is inconsistent in many ways: dynamic storage and CPU performance, other processes, dynamic system settings, etc. There are a LOT of underlying variables in the storage layer alone.
A 60-second test is not long enough for anything (see the fio sketch after these points).
Simple operations are not good enough for baseline comparisons.
There is ZERO performance impact on storage and CPU with containers; there is an impact on networking, but I assume that is not applicable here.
Databases and database management systems must be tuned in specific ways; there is no "install and run" approach. We sysadmins/DB admins usually need days to get one running smoothly. Also, performance changes over time.
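For example, a storage baseline should run for minutes, not seconds, and should be taken both on the host and inside the container against the same volume (a minimal sketch with fio; directory, volume and image names are assumptions):

# host: 5 minutes of 4k random writes with a final fsync
mkdir -p /var/tmp/fiotest
fio --name=base --directory=/var/tmp/fiotest --rw=randwrite --bs=4k \
    --size=1g --runtime=300 --time_based --end_fsync=1

# the same test against the Docker volume under test
# (use any image that has fio installed)
docker run --rm -v mongo_data:/data your-fio-image \
    fio --name=base --directory=/data --rw=randwrite --bs=4k \
    --size=1g --runtime=300 --time_based --end_fsync=1

If those two numbers match, the container is not your bottleneck.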
After a couple of weeks of testing and troubleshooting, I finally got the answer, and I shall share my findings with the rest of the DevOps community and anyone facing the same issue as me.
Correct this statement if needed: Docker containers started off on Linux; Microsoft joined the container bandwagon late, and for Linux containers to work on Windows, the DevOps team needs to install WSL2. That costs extra overhead, which shows up in processing speed.
So to improve performance with containers, the setup should run on a Linux OS instead of Windows. (And yes, the write time dropped drastically.)
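A quick way to confirm which kernel your Docker daemon actually runs on (a sketch using docker info's template output):

docker info --format '{{.OperatingSystem}} ({{.KernelVersion}})'
# Docker Desktop on Windows reports a WSL2/LinuxKit kernel here,
# while a native Linux host reports its own distribution and kernel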
I am new to Kubernetes.
Is it possible to turn every end-user machine (PCs, Macs, RPis, etc.) whose owner, with full consent, downloaded my Electron research app into a node of a k8s cluster, on which I could then run kubeflow.org to do ML research?
Thanks
Kubernetes relies on some container engine. Usually that's Docker; there are efforts to create a common container interface for Kubernetes, and that's where CRI-O comes in: an abstraction that would allow any container engine to run underneath it.
That being said, containers "don't exist": they are a native abstraction in the Linux kernel composed of cgroups and namespaces. This means the abstraction and isolation don't live in a hypervisor (which usually talks to the kernel), as is the case with regular virtual machines, but in the actual Linux kernel.
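You can see this first-hand on any Linux box (a minimal sketch using unshare from util-linux):

# start a shell in fresh PID and mount namespaces;
# with --mount-proc, ps only sees processes inside the namespace
sudo unshare --pid --fork --mount-proc bash
ps aux    # shows only bash and ps, none of the host's processes

That isolated process tree is the same kernel mechanism container engines build on.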
macOS uses its own kernel, which, to the best of my knowledge, doesn't support any sort of containers.
Windows does support containers via Hyper-V, and I believe Windows Server has more native built-in support for them. See this link for a better explanation https://learn.microsoft.com/en-us/virtualization/windowscontainers/about/ and this one for Kubernetes https://kubernetes.io/docs/getting-started-guides/windows/.
As far as the Raspberry Pi goes, there is an ongoing effort that brought k8s to ARM; see this link (https://github.com/luxas/kubernetes-on-arm). That being said, you need an entire cluster of Raspberry Pis to actually make it work, as it requires a lot of resources. One Raspberry Pi won't get you very far.
How to go about this?
You need Linux to run Kubernetes. Everywhere.
If you want to create a "giant" Kubernetes cluster, your best bet is to use a virtualization technology on the PCs running Windows and on the Macs, and create virtual machines that you can use as Kubernetes nodes.
In short: create virtual machines where there's no Linux, and install Kubernetes natively where there is.
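Once each VM runs Linux, joining it to the cluster is the standard kubeadm flow (a sketch; the address, token and hash are placeholders printed by kubeadm init on the control-plane node):

# on the control-plane machine
sudo kubeadm init

# on every Linux VM you want as a worker node
sudo kubeadm join 192.0.2.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>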
Parallels, Veertu, or plain xhyve are good ways of running virtualization on macOS.
VMware and VirtualBox are good options for both Windows and Mac.
Libvirt and VirtualBox are good solutions for virtualization on Linux.
Thanks for stopping by!
So I just bought a Pi Desktop kit for my Raspberry Pi 3B v1.2, which features an add-on module with an mSATA disk slot, a real-time clock and power control. I installed the latest Raspbian Stretch (kernel version 4.9.59-v7+) on the mSATA SSD and am now booting Raspbian from it with no SD card in the onboard card reader.
A kworker process is now constantly hogging between 8.0 and 13.5% CPU, which seems quite unnecessary, and it has annoying consequences, e.g. lagging videos in Kodi. This never happened before I added the module.
I then tried perf (inspired by this thread) by running sudo perf record -D 1000 -g -a sleep 20 and then sudo perf report to figure out which kernel tasks might be responsible.
But I can't figure out how to go on from there to reduce the workload. Could it be caused by the real-time clock embedded in the add-on board, since __timer_delay, arch_timer_read_counter_long, and arch_counter_get_cntpct show high CPU usage? Other symbols with high load are finish_task_switch and _raw_spin_unlock_irqrestore, but I can't guess what those are about.
Am I right that this is unnecessary CPU load, and if so, how can I reduce it?
Many thanks in advance!
I had the same issue and found that the root cause was the missing SD card in my Raspberry Pi. When the SD card is absent, the kernel frequently polls the SD card slot, which causes high CPU usage.
Download sdtweak.dtbo, replace the existing one under /boot/overlays/ with the new one, then add dtoverlay=sdtweak,poll_once to /boot/config.txt and reboot the machine. It worked for me.
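In other words, /boot/config.txt gains this line, which makes the kernel poll the SD slot once at boot instead of continuously:

dtoverlay=sdtweak,poll_once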
See also: https://github.com/raspberrypi/linux/issues/2567
You can install iotop to look at I/O load.
For high load, try these settings in /etc/sysctl.conf:
# reclaim the dentry/inode caches more aggressively
vm.vfs_cache_pressure=500
# prefer dropping caches over swapping out process memory
vm.swappiness=10
# start background writeback of dirty pages at 1% of RAM
vm.dirty_background_ratio=1
# block writers once dirty pages reach 50% of RAM
vm.dirty_ratio=50
then apply them with sysctl --system
I use Grafana with CollectD (and Graphite) to monitor my network usage on my server.
I use the 'Interface' Plugin of CollectD and display the graphs like this:
alias(scale(nonNegativeDerivative(collectd.graph_host.interface-eth0.if_octets.rx), 0.00000095367431640625), 'download')
(the scale factor is 1/2^20, i.e. octets converted to MiB)
When I initiate a download with a speed limit, the download runs for approx. 10 minutes, but the graph only shows a short peak (the green line is the download).
Do I have to use some other metric? I also tried the 'ethstat' plugin, but that has so many options, none of which I understand!
Is there any beginner documentation? I only found the CollectD docs, which I read, but they don't say anything about what the ethstat metrics actually mean.
No, there isn't any beginner documentation on what the ethstat metrics mean in collectd. This is because the ethstat plugin reports statistics collected by ethtool on your system, and ethtool stats are vendor-specific.
To point you in the right direction, run ethtool -S eth0.
That should show you names and numbers matching what collectd is reporting.
Now run ethtool -i eth0 and find your driver info.
Then google your driver name and find out what statistics your card reports and what they mean. It may involve reading Linux driver source code, but don't be too scared of that: what you want is probably in the comments, not the code.
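For example (a sketch; e1000e is just a placeholder, use whatever driver ethtool -i printed), the stat names usually live in one strings table in the driver source:

# find the driver name
ethtool -i eth0 | awk '/^driver:/ {print $2}'
# then search a kernel source tree for that driver's ethtool stats table
grep -rn "gstrings" drivers/net/ethernet/intel/e1000e/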
Anyone have any success or failure running Jira on a VM?
I am setting up a new source control and defect tracking server. My server room is nearly full, and my services group suggested a VM. I've seen that a bunch of people run SVN on a VM (including NCSA). A VM would also free me from hardware problems and give me high availability. Finally, it frees me from some red tape and can be implemented faster.
So, does anyone know of any reason why I shouldn't put Jira on a VM?
Thanks
We just did the research on this; here is what we found:
If you are planning to have a small number of projects (10-20) with 1,000 to 5,000 issues in total and about 100-200 users, a recent server (2.8+GHz CPU) with 256-512MB of available RAM should cater for your needs.
If you are planning for a greater number of issues and users, adding more memory will help. We have reports that allocating 1GB of RAM to JIRA is sufficient for 100,000 issues.
For reference, Atlassian's JIRA site (http://jira.atlassian.com/) has over 33,000 issues and over 30,000 user accounts. The system runs on a 64bit Quad processor. The server has 4 GB of memory with 1 GB dedicated to JIRA.
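If you do need to give JIRA more memory along those lines, the heap is normally set in the standalone distribution's bin/setenv.sh (a sketch; values and path depend on your install):

# <jira-install>/bin/setenv.sh
JVM_MINIMUM_MEMORY="256m"
JVM_MAXIMUM_MEMORY="1024m"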
For our installation (<10,000 issues, <20 concurrent sessions at a time) we use very little in the way of server resources (<1GB RAM; on a quad-core processor we typically use <5% CPU with <30% peaks), and the VM didn't impact performance by any measurable amount.
I don't see why you shouldn't run Jira off a VM, but Jira needs a good amount of resources, and if your VM resides on a heavily loaded machine, it may exhibit poor performance. Why not log a support request (support.atlassian.com) and ask?
We run Jira on a virtual machine - VMware running Windows Server 2003 SE, storing data on our SQL Server 2000 server. No problems; it works well.
My company moved our JIRA instance from a hosted physical server to an Amazon EC2 instance recently, and everything is holding up pretty well. We're using an m1.large instance (64-bit OS with 4 virtual cores and 8GB RAM), but that's way more than we need just for JIRA; we're also hosting Confluence and our corporate Web site on the same EC2 instance.
Note that we are a relatively small outfit; our JIRA instance has 25 users (with maybe 15 of them active) and about 1000 JIRA issues so far.
We run our JIRA (and other Atlassian apps) instances on Linux-based VMs. Everything runs very nicely.
Disk access speed with JIRA on VM...
http://confluence.atlassian.com/display/JIRA/Testing+Disk+Access+Speed
I'm wondering whether the person running JIRA on a VM (Chris Latta) is running ESX underneath - that may be faster than a Windows host.
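If you want a rough number of your own to compare against that page (a sketch; the test file path is arbitrary, and oflag=direct bypasses the page cache):

dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
rm /tmp/ddtest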
I have managed to run Jira, Bamboo, and FishEye on a set of virtual machines all hosted on the same server, although I would not recommend this setup for production in most shops. Jira has fairly low requirements by today's standards. Just be sure you can allot enough resources from your host machine, and things should run fine.
If by VM you mean a virtual instance of an OS, such as a Linux instance running on Xen, VMware, or even Amazon EC2, then Jira will run just fine. The only time you need to worry about virtual systems is when you're doing something that depends on hardware, such as running graphical 3D apps, or something that uses a fax modem or a Digium telephony card with Asterisk.