I wrote a C program running on CentOS 7 that generates around 1k files every second. Another program moves the files to a different folder (on the same /home partition), reads them, processes them, and deletes them. After a few hours the /home partition becomes very slow, even after I stop both the file generation program and the file processing program. If I reboot the server, I can ping it but can no longer SSH in. The server only returns to a normal state after a hard reset.
In the file generation C program, I always open, write, and close each file; the file processing program is written in Java. Is this caused by some leak in my programs? But then why is the /home partition still very slow even after stopping the programs, only returning to normal after a hard reset of the server? Or is it an OS problem? What is the difference between a hardware reset and a reboot, and why does the system not recover after a reboot?
The file system problem was fixed after upgrading the OS to CentOS 7.3.
I use the latest version of VS Code (1.69.2) and connect remotely to my cloud VM. After one or two days, I found that the CPU usage is very high. The details of the process are:
my-user 18954 17082 0 12:10 ? 00:00:04 /home/my-user/work/.vscode-server/bin/3b889b090b5ad5793f524b5d1d39fda662b96a2a/node /home/my-user/work/.vscode-server/bin/3b889b090b5ad5793f524b5d1d39fda662b96a2a/out/bootstrap-fork --type=extensionHost --transformURIs --useHostProxy=false
There are 8 node processes in total, and each one's CPU usage is greater than 50%.
The question is:
What process is this?
Why is the CPU usage so high?
When I close all the connected windows and then reconnect to my remote VM, these processes are still here. Why are these processes not closing automatically?
Is this a bug in VS Code 1.69.2?
VS Code uses Node; I suppose it's one of those processes used for autocomplete that scans your files, but it never ends. I have to kill them manually; one node process uses 100% of the CPU.
I solved the issue by removing the .vscode-server folder created in my home folder. You have to do this from a remote shell (not from the VS Code terminal).
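For reference (a hedged suggestion, assuming the default location): from an SSH session this amounts to something like rm -rf ~/.vscode-server, after which VS Code installs a fresh server the next time you connect.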
I solved the issue by removing the Settings Sync plugin.
New to MDT.
So I am following the MS step-by-step guides:
https://learn.microsoft.com/en-us/windows/deployment/windows-10-poc
https://learn.microsoft.com/en-us/windows/deployment/windows-10-poc-mdt
I am at step 28 (in the second guide):
Deploy Windows 10 in a test lab using Microsoft Deployment Toolkit
The deployment wizard has been launched in a VM on the host system, and I have watched the process run for an hour. It finally finishes, but it does not create the .wim on the server share as expected and as referred to in the Bootstrap.ini:
Bootstrap.ini
[Settings]
Priority=Default
[Default]
DeployRoot=\\SRV1\MDTBuildLab$
UserDomain=CONTOSO
UserID=MDT_BA
UserPassword=pass#word1
SkipBDDWelcome=YES
I have verified that the share "DeployRoot" exists and can be connected to using the provided credentials and that the share has the correct permissions to create/delete files.
Not sure what I'm missing, but my expectation was that a .wim would have been created in \\SRV1\MDTBuildLab$\Captures, yet there is nothing in that folder.
Just before stopping, the deployment wizard reboots several times in quick succession, which doesn't appear correct to me, but as I have never witnessed a successful capture I can't say for sure that this isn't what's supposed to happen.
I'm not even sure where I can view any log files to figure out why it fails.
Any assistance appreciated!
Further Info:
I activated monitoring. It gets to step 86 of 93. The last thing I see is "Applying WinPE (BD)" or something similar, and then it restarts. Then several quick reboots occur (the loading bar appears for a second or two, then it reboots again), which I think are failing. Finally it gives up! The process never completes!
When I attempt to mount the client REFW10X64-001.vhdx to check the logs, I am greeted with this message:
The disk image isn't initialized, contains partitions that aren't recognizable, or contains volumes that haven't been assigned drive letters. Please use the Disk Management snap-in to make sure that the disk, partitions, and volumes are in a usable state.
So it looks like the last step totally screwed up the disk, which would explain the last several boots failing to load anything.
So: no errors, no warnings, no logs, no finish, and no .wim generated.
How do I troubleshoot this?
I know this post is old, but the normal behavior would be as follows:
Using the boot image, you boot into WinPE
The task sequence is started and the OS gets applied to the disk
Reboot
Boot into full Windows where the task sequence also continues
Under full Windows, one of the last steps is that WinPE gets applied again
Reboot
Computer boots automatically into WinPE
The wim file gets created (WinPE is running on the RAM disk and the regular C: drive (and any additional drives) is being mirrored into the wim file)
Computer performs the FINISHACTION.
We would need at least BDD.log and smsts.log to further troubleshoot. My guess is that WinPE was not applied correctly.
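In case it helps to locate them: while the task sequence is running, MDT normally writes its logs (including BDD.log and smsts.log) under C:\MININT\SMSOSD\OSDLOGS (X:\MININT\SMSOSD\OSDLOGS while in WinPE), and after a completed deployment they are usually moved to %WINDIR%\TEMP\DeploymentLogs.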
One of the mongo nodes in the replica set went down today. I couldn't find out what happened, but when I checked the logs on the server, I saw this message: 'mongod main process killed by KILL signal'. I tried googling for more information but failed. Basically, I'd like to know what the KILL signal is, who triggered it, and possible causes/fixes.
Mongo version 3.2.10 on Ubuntu.
The KILL signal (SIGKILL, signal 9) terminates the process instantly and cannot be caught, so the process has no chance to exit cleanly. It is typically issued by the kernel when something goes very wrong (for example, by the out-of-memory killer), or manually via kill -9.
If this is the only log entry left, the process was killed abruptly. This probably means that your system ran out of memory (I've had this problem with other processes before). You could check whether swap is configured on your machine (using swapon -s), but you should also consider adding more memory to your server: swap would only keep it from breaking outright, as it is very slow.
Other things worth looking at are the free disk space left and the syslog (/var/log/syslog).
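If the out-of-memory killer was responsible, the kernel usually leaves a trace there: something like dmesg | grep -i 'out of memory' or grep -i 'killed process' /var/log/syslog should show which process the kernel chose to kill and why.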
I am installing Ubuntu 14.04 on an Acer machine, and I realize that the OS can't initialize if the boot files are lost.
I would really appreciate it if somebody could provide information about how these files work.
Thank you very much.
There are several stages of booting in GRUB, and each of them uses different file(s):
Stage 1: boot.img is stored in the master boot record (MBR), or optionally in any of the volume boot records (VBRs), and addresses the next stage. At installation time it is configured to load the first sector of core.img.
Stage 2: core.img is by default written to the sectors between the MBR and the first partition, when these sectors are free and available. Once executed, core.img will load its configuration file and any other modules needed, particularly file system drivers; at installation time, it is generated from diskboot.img and configured to load the stage 3 by its file path.
This is just a little piece of info; for full information, check Wikipedia.
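As a practical note (hedged, assuming the disk is /dev/sda and a standard BIOS/MBR setup): if these boot files are lost, running sudo grub-install /dev/sda followed by sudo update-grub from the installed system (or via chroot from live media) rewrites boot.img and core.img and regenerates the GRUB configuration.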
Looking for any advice I can get.
I have 16 virtual CPUs all writing to a single remote MongoDB server. The machine that's being written to is a 64-bit machine with 32GB RAM, running Windows Server 2008 R2. After a certain amount of time, all the CPUs stop cold (no gradual performance reduction), and any attempt to get a Remote Desktop Connection hangs.
I'm writing from Python via pymongo, and the insert statement is "[collection].insert([document], safe=True)"
I decided to more actively monitor my server as the distributed write job progressed, remoting in from time to time and checking the Task Manager. What I see is a steady memory creep, from 0.0GB all the way up to 29.9GB, in a fairly linear fashion. My leading theory is therefore that my writes are filling up the memory and eventually overwhelming the machine.
Am I missing something really basic? I'm new to MongoDB, but I remember that when writing to a MySQL database, inserts are typically followed by commits, where it's the commit statement that actually makes sure the record is written. Here I'm not doing any commits...?
Thanks,
Dave
Try it with journaling turned off and see if the problem remains.
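On the commit question: MongoDB has no separate commit statement for ordinary writes; with safe=True each insert is acknowledged by the server before the call returns, so you are not missing a commit. Below is a minimal sketch (pymongo 2.x style, matching the safe=True API in the question) that batches documents so each acknowledged round trip covers many inserts instead of one; the host, database/collection names, and the document generator are hypothetical:

# Hypothetical sketch for pymongo 2.x (the era of the safe=True keyword).
# Passing a list to insert() performs one acknowledged bulk insert per batch.
from pymongo import MongoClient  # available since pymongo 2.4

client = MongoClient("mongodb://db-host:27017")  # hypothetical host
collection = client["mydb"]["events"]            # hypothetical names

def generate_documents(n):
    # Stand-in for the real document producer.
    for i in range(n):
        yield {"seq": i, "payload": "x" * 100}

batch = []
for doc in generate_documents(100000):
    batch.append(doc)
    if len(batch) >= 1000:
        collection.insert(batch, safe=True)  # one acknowledged round trip
        batch = []
if batch:
    collection.insert(batch, safe=True)

Also note, as a hedged observation: on MongoDB of that era (memory-mapped MMAPv1 storage), resident memory growing toward all available RAM is expected behavior, so the steady creep in Task Manager is not by itself proof of a leak.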