VS Code showing "Error: Session cannot generate requests" after every use of CatBoost with GPU

I have been trying to use my Nvidia GeForce GTX 1650 GPU to train a CatBoost regressor.
Training works fine, but after it finishes the kernel dies and I have to restart VS Code.
Here is the code:
import pandas as pd
import numpy as np

# Load the training and test data
df = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

from catboost import CatBoostRegressor

# Train the regressor on the GPU
cat = CatBoostRegressor(iterations=2000, learning_rate=0.061582, task_type='GPU')
cat.fit(df.drop('loss', axis=1), df.loss)
This runs fine, but every time I try to run the next cell it shows this error:
Error: Session cannot generate requests
Error: Session cannot generate requests
at w.executeCodeCell (c:\Users\singh\.vscode\extensions\ms-toolsai.jupyter-2021.8.1236758218\out\client\extension.js:90:327199)
at w.execute (c:\Users\singh\.vscode\extensions\ms-toolsai.jupyter-2021.8.1236758218\out\client\extension.js:90:326520)
at w.start (c:\Users\singh\.vscode\extensions\ms-toolsai.jupyter-2021.8.1236758218\out\client\extension.js:90:322336)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at async t.CellExecutionQueue.executeQueuedCells (c:\Users\singh\.vscode\extensions\ms-toolsai.jupyter-2021.8.1236758218\out\client\extension.js:90:336863)
at async t.CellExecutionQueue.start (c:\Users\singh\.vscode\extensions\ms-toolsai.jupyter-2021.8.1236758218\out\client\extension.js:90:336403)
I have updated all my packages using pip-review and updated the Jupyter extension; XGBoost with tree_method='gpu_hist' works fine.
Operating System - Windows
CUDA version - 11.2
NVIDIA driver - 462

I had the same issue. I restarted the kernel and VS Code, and that seems to have fixed it.

In my experience, it only means that somewhere in my code there is an 'infinite loop'. The way I solved this was to restart VS Code and check my code for said "infinite loop" before rerunning it. I hope this helps...

Related

How to disable the TensorFlow.js error log for my server host?

As of right now I am in a bit of a rush to get an answer to my problem. The model has been trained and the server works locally with the NN running in the background, but on the server we get the following error message:
2020-04-08 11:54:15.787274: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2799865000 Hz
2020-04-08 11:54:15.787801: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4b5ca00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-08 11:54:15.787830: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
terminate called after throwing an instance of 'std::system_error'
what(): Resource temporarily unavailable
It seems that the server host doesn't like the fact that TensorFlow is trying to log the following error/warning message:
2020-04-08 11:53:42.453164: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Is there a way in JS to disable these error logs so we can run the trained model on the server? Thanks in advance!

Google Ortools - trouble with routing example

I am having an odd issue with the Google OR-Tools vehicle routing example, found here:
https://developers.google.com/optimization/routing/tsp/vehicle_routing
Using Windows 10 and Python 3.6...
When executing the full program code provided in the link above, the program freezes up and exits. The command line shows the following:
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0502 21:33:22.115679 7972 search.cc:2658] Check failed: step > 0 (0 vs. 0)
*** Check failure stack trace: ***
I have narrowed the code causing the freeze down to this line:
assignment = routing.SolveWithParameters(search_parameters)
I am certain I have the library properly installed because other examples from the program have run successfully. I attempted to use Visual Studio, and even went so far as to disable my second GPU.
I am wondering if anyone has encountered this issue and possibly knows how to fix it. Thank you.
The issue was solved as follows:
The original example on Google's website creates the following variable:
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
I had changed it to:
search_parameters = pywrapcp.RoutingModel.DefaultModelParameters()
But the required change is:
search_parameters = pywrapcp.RoutingModel.DefaultSearchParameters()
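For reference, here is a minimal, self-contained sketch of the same pattern using the current ortools Python API (the distance matrix below is made up purely for illustration, and newer releases expose the defaults as the module-level pywrapcp.DefaultRoutingSearchParameters() rather than RoutingModel.DefaultSearchParameters()). The key point from the answer is unchanged: SolveWithParameters must be given *search* parameters, not model parameters.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

# Made-up symmetric distances between 4 locations (illustration only)
distance_matrix = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

# One vehicle starting and ending at location 0
manager = pywrapcp.RoutingIndexManager(len(distance_matrix), 1, 0)
routing = pywrapcp.RoutingModel(manager)

def distance_callback(from_index, to_index):
    # Convert routing indices back to distance-matrix nodes
    return distance_matrix[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

transit_index = routing.RegisterTransitCallback(distance_callback)
routing.SetArcCostEvaluatorOfAllVehicles(transit_index)

# Search parameters (not model parameters) are what SolveWithParameters expects
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
search_parameters.first_solution_strategy = (
    routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)

assignment = routing.SolveWithParameters(search_parameters)
if assignment:
    print("Route cost:", assignment.ObjectiveValue())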

crash on the GPU with {inc,set}_subtensor and broadcasting the value

I am fine-tuning the VGG16 network with Keras 2.0.2 and Theano 0.9.0 as the backend, on Windows 10 64-bit with Anaconda 2, following this blog post: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
I found that someone else had the same issue in the pull requests and it was fixed by changing a few lines of code (link: https://github.com/Theano/Theano/pull/2075). However, that was an old version of Theano (the PR is from 2014); Theano 0.9.0 has already changed that code, and I still have this problem.
Every time I run the last line (i.e. model.fit_generator), everything works fine until the end of the first epoch. That is exactly when the GPU always crashes.
model.fit_generator(
    train_generator,
    samples_per_epoch=2000,
    nb_epoch=50,
    validation_data=validation_generator,
    nb_val_samples=400)
And here is the error message:
CudaNdarray_CopyFromCudaNdarray: need same dimensions for dim 0, destination=32, source=16
Apply node that caused the error: GpuIncSubtensor{Set;::, ::, int64:int64:, int64:int64:}(GpuAlloc{memset_0=True}.0, GpuElemwise{mul,no_inplace}.0, Constant{1}, Constant{225}, Constant{1}, Constant{225})
Toposort index: 143

Metadata.framework error while running ROOT in Jupyter

I am running code in a Jupyter notebook (here it is for reference):
import ROOT as root

f = root.TFile("160721_0828.root")
for event in f.tree.events:
    print(1)
It should be simple code, just looping through a file. But when I run it, the kernel crashes and I have to restart everything. I also get many errors of this kind in the terminal:
2016-08-08 18:25:20.439 atos[99872:272f] Metadata.framework [Error]: couldn't get the client port
0x0000000100000cc4 in start (in python) + 52
before the program crashes. I am using a Mac running OS X 10.9.5. What could be the cause?
Perhaps your Spotlight indexing is disabled and the necessary metadata is inaccessible.
Try turning Spotlight indexing on; the error message:
"Metadata.framework [Error]: couldn't get the client port"
should disappear and the kernel should be stable.

Can I register event callbacks using the libvirt Python module with a QEMU backend?

I would like to write some code to monitor events for domains running under QEMU, managed by libvirt. However, trying to register an event handler yields the following error:
>>> import libvirt
>>> conn = libvirt.openReadOnly('qemu:///system')
>>> conn.domainEventRegister(callback, None)
libvir: Remote error : this function is not supported by the connection driver: no event support
("callback" in this case is a stub function that simply prints its arguments.)
The examples I've been able to find regarding libvirt's event handling don't seem to be specific as to which backend hypervisors support which features. Is this expected to work for QEMU backends?
I'm running a Fedora 16 system, which includes libvirt 0.9.6 and qemu-kvm 0.15.1.
For folks finding themselves here via <searchengine>:
UPDATE 2013-10-04
Many months and a few Fedora releases later, the event-test.py code in the libvirt git repository runs correctly on Fedora 19.
Make sure you have registered the libvirt event loop (or set up your own) before registering for events.
There is a nice example of event handling shipped with the libvirt source (the file is called event-test.py). I'm attaching an example based on that code:
import libvirt
import time
import threading

def callback(conn, dom, event, detail, opaque):
    print "EVENT: Domain %s(%s) %s %s" % (dom.name(),
                                          dom.ID(),
                                          event,
                                          detail)

eventLoopThread = None

def virEventLoopNativeRun():
    while True:
        libvirt.virEventRunDefaultImpl()

def virEventLoopNativeStart():
    global eventLoopThread
    libvirt.virEventRegisterDefaultImpl()
    eventLoopThread = threading.Thread(target=virEventLoopNativeRun,
                                       name="libvirtEventLoop")
    eventLoopThread.setDaemon(True)
    eventLoopThread.start()

if __name__ == '__main__':
    # Start the default event loop before registering for events
    virEventLoopNativeStart()

    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegister(callback, None)
    conn.setKeepAlive(5, 3)

    # Keep the main thread alive while the connection is up
    while conn.isAlive() == 1:
        time.sleep(1)
Good luck!
//Seto