Python - Integration of multiprocessing with classes

I have a problem in Python with the multiprocessing module.
Basically, I'm making a game (with tkinter for the graphics) in which I have a Game class and several entity classes that all have an update(self) method.
So it's a bit like:
class Game:
  def __init__(self, etc...):
    self.entities = []
  def gameloop(self):
    for entity in self.entities:
      entity.update()

class EntityExample:
  def __init__(self, game, etc...):
    self.game = game
  def update(self):
    # stuff
And then I do:
game = Game()
game.entities.append(EntityExample(game))
game.gameloop()
So, to optimize the code, I tried something like this:
import multiprocessing

class Game:
  def __init__(self, etc...):
    self.entities = []
    self.threads = []
    self.lock = multiprocessing.Lock()
  def gameloop(self):
    for entity in self.entities:
      entity.update()

class EntityExample:
  def __init__(self, game, etc...):
    self.game = game
  def update(self):
    self.game.lock.acquire()
    # stuff
    self.game.lock.release()
And in gameloop:
for entity in self.entities:
  t = multiprocessing.Process(target=entity.update)
  t.start()
  t.join()
  self.threads.append(t)
The goal was to do the calculations on different cores at the same time to improve performance, but sadly it doesn't work.
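As a side note, the loop above joins each process right after starting it, which serializes the updates even if everything else worked; the usual pattern is to start them all first and join afterwards. A minimal sketch of just that change (still ignoring the pickling problem mentioned below):

processes = []
for entity in self.entities:
  p = multiprocessing.Process(target=entity.update)
  p.start()
  processes.append(p)
for p in processes:
  p.join()  # wait only after every process has been started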
IDLE also asks me to kill the program: "The program is still running. Do you want to kill it?".
Thanks in advance,
Talesseed
P.S.: the classes are not picklable
P.P.S.: I've read that creating a new Process copies the code in the file over to the new process, and that could be a problem because my code is ~1600 lines long.
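On that note: with the spawn start method (the default on Windows), each child process re-imports the main module, so the script needs an if __name__ == "__main__": guard; its absence is a common cause of odd behaviour in IDLE. And because a Process target must be picklable, one workaround is to factor the per-entity computation into a module-level function over plain data. A minimal sketch, assuming the update can be expressed that way (update_entity and entity.state are hypothetical names, not from the original code):

import multiprocessing

def update_entity(state):
  # hypothetical pure function: takes an entity's picklable state
  # and returns the updated state
  return state

class Game:
  def __init__(self):
    self.entities = []
  def gameloop(self):
    with multiprocessing.Pool() as pool:
      # compute the updates in parallel, then apply them back
      new_states = pool.map(update_entity, [e.state for e in self.entities])
      for entity, state in zip(self.entities, new_states):
        entity.state = state

if __name__ == "__main__":  # required with the spawn start method
  game = Game()
  game.gameloop()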

I found something interesting. Apparently, running it through the console makes it work. But I've done some testing and the multiprocessing version is in fact slower than the single-threaded version. I've no clue :/
EDIT: Nevermind, it works now, my bad.

Related

Running blenderbot-3B model locally does not provide same result as on Inference API

I tried the facebook/blenderbot-3B model using the Hosted Inference API and it works pretty well (https://huggingface.co/facebook/blenderbot-3B). Now I tried to use it locally with the Python script shown below. The generated responses are much worse than those from the Inference API and don't make sense most of the time.
Is different code used for the Inference API, or did I make a mistake?
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"

chat_bots = {
    'BlenderBot': [BlenderbotTokenizer.from_pretrained("hyunwoongko/blenderbot-9B"),
                   BlenderbotForConditionalGeneration.from_pretrained("hyunwoongko/blenderbot-9B").to(device)],
}
key = 'BlenderBot'
tokenizer, model = chat_bots[key]

for step in range(100):
    # encode the new user input and append it to the running token history
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt').to(device)
    if step > 0:
        bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
    else:
        bot_input_ids = new_user_input_ids
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id).to(device)
    print("Bot: ", tokenizer.batch_decode(chat_history_ids, skip_special_tokens=True)[0])

Function in pytest file works only with hard coded values

I have the test_dss.py file below, which is used with pytest:
import dataikuapi
import pytest

def setup_list():
    client = dataikuapi.DSSClient("{DSS_URL}", "{API_KEY}")
    client._session.verify = False
    project = client.get_project("{DSS_PROJECT}")
    # Check that there is at least one scenario TEST_XXXXX & that all test scenarios pass
    scenarios = project.list_scenarios()
    scenarios_filter = [obj for obj in scenarios if obj["name"].startswith("TEST")]
    return scenarios_filter

def test_check_scenario_exist():
    assert len(setup_list()) > 0, "You need at least one test scenario (name starts with 'TEST_')"
@pytest.mark.parametrize("scenario", setup_list())
def test_scenario_run(scenario, params):
    client = dataikuapi.DSSClient(params['host'], params['api'])
    client._session.verify = False
    project = client.get_project(params['project'])
    scenario_id = scenario["id"]
    print("Executing scenario ", scenario["name"])
    scenario_result = project.get_scenario(scenario_id).run_and_wait()
    assert scenario_result.get_details()["scenarioRun"]["result"]["outcome"] == "SUCCESS", \
        "test " + scenario["name"] + " failed"
My issue is with the setup_list function, which only works with hard-coded values for {DSS_URL}, {API_KEY} and {DSS_PROJECT}. I'm not able to use params or another mechanism like in test_scenario_run.
Any idea how I can pass the params to this function as well?
The parameters in the mark.parametrize marker are evaluated at load time, when the information about the config parameters is not yet available. Therefore you have to parametrize the test at runtime, where you have access to the configuration.
This can be done in pytest_generate_tests (which can live in your test module):
@pytest.hookimpl
def pytest_generate_tests(metafunc):
    if "scenario" in metafunc.fixturenames:
        host = metafunc.config.getoption('--host')
        api = metafunc.config.getoption('--api')
        project = metafunc.config.getoption('--project')
        metafunc.parametrize("scenario", setup_list(host, api, project))
This implies that your setup_list function takes these parameters:
def setup_list(host, api, project):
    client = dataikuapi.DSSClient(host, api)
    client._session.verify = False
    project = client.get_project(project)
    ...
And your test just looks like this (without the parametrize marker, as the parametrization is now done in pytest_generate_tests):
def test_scenario_run(scenario, params):
    scenario_id = scenario["id"]
    ...
The parametrization is now done at run-time, so it behaves the same as if you had placed a parametrize marker in the test.
And the other test that tests setup_list now has also to use the params fixture to get the needed arguments:
def test_check_scenario_exist(params):
    assert len(setup_list(params["host"], params["api"], params["project"])) > 0, \
        "You need at least ..."

std::lock_guard (mutex) produces deadlock

First: Thanks for reading this question and trying to help me out. I'm new to the whole threading topic and I'm facing a serious mutex deadlock bug right now.
Short introduction:
I wrote a game engine a few months ago, which works perfectly and is already being used in games. This engine is based on SDL2. I wanted to improve my code by making it thread-safe, which would be very useful for increasing performance or playing around with some other theoretical concepts.
The problem:
The game uses internal game stages to display different states of the game, like displaying the menu, or displaying other parts of the game. When entering the "Asteroid Game" stage I receive an exception, which is thrown by the std::lock_guard constructor call.
The problem in detail:
When entering the "Asteroid Game" stage, a modelGetDirection() function is called to receive the direction vector of a model. This function uses a lock_guard to make it thread-safe. When debugging, this is where the exception is thrown: the program enters the lock_guard constructor and throws. The odd thing is that this function is NEVER called before this point. This is its very first call, and every test run crashes right here!
This is where the debugger stops, in threadx:
inline int _Mtx_lockX(_Mtx_t _Mtx)
    {   // throw exception on failure
    return (_Check_C_return(_Mtx_lock(_Mtx)));
    }
And here are the actual code snippets which I think are important:
mutex struct:
struct LEMutexModel
{
  // of course there are more mutexes inside here
  mutex modelGetDirection;
};
engine class:
typedef class LEMoon
{
  private:
    LEMutexModel mtxModel;
    // other mutexes, attributes, methods and so on

  public:
    glm::vec2 modelGetDirection(uint32_t, uint32_t);
    // other methods
} *LEMoonInstance;
modelGetDirection() (engine) function definition:
glm::vec2 LEMoon::modelGetDirection(uint32_t id, uint32_t idDirection)
{
  lock_guard<mutex> lockA(this->mtxModel.modelGetDirection);
  glm::vec2 direction = {0.0f, 0.0f};
  LEModel * pElem = this->modelGet(id);

  if(pElem == nullptr)
    {pElem = this->modelGetFromBuffer(id);}

  if(pElem != nullptr)
    {direction = pElem->pModel->mdlGetDirection(idDirection);}
  else
  {
    #ifdef LE_DEBUG
      char * pErrorString = new char[256 + 1];
      sprintf(pErrorString, "LEMoon::modelGetDirection(%u)\n\n", id);
      this->printErrorDialog(LE_MDL_NOEXIST, pErrorString);
      delete [] pErrorString;
    #endif
  }

  return direction;
}
This is the game function that uses the modelGetDirection method; it controls a space ship:
void Game::level1ControlShip(void * pointer, bool controlAble)
{
  Parameter param = (Parameter) pointer;
  static glm::vec2 currentSpeedLeft = {0.0f, 0.0f};
  glm::vec2 speedLeft = param->engine->modelGetDirection(MODEL_VERA, LEFT);
  static const double INCREASE_SPEED_LEFT = (1.0f / VERA_INCREASE_LEFT) * speedLeft.x * (-1.0f);
  // ... more code, I think that's not important
}
So as mentioned before: when entering the level1ControlShip() function, the program enters the modelGetDirection() function, and an exception is thrown when trying to call:
lock_guard<mutex> lockA(this->mtxModel.modelGetDirection);
And as mentioned, this is the first call of this function in the whole application run!
So why is that? I appreciate any help here! The whole engine (not the game) is an open source project and can be found on GitHub, in case I forgot some important code snippets (sorry in that case!):
GitHub: Lynar Moon Engine
Thanks for your help!
Greetings,
Patrick

NSXMLElement walk through

I have an element of an NSXMLDocument (an FCPX-exported .fcpxml) which I'd like to walk through, as opposed to getting the children and then the nested children, etc.:
 
    <spine>  
     <clip name="Referee" offset="0s" duration="5s" format="r2" tcFormat="NDF">  
      <video offset="0s" ref="r3" duration="418132800/90000s">  
       <audio lane="-2" offset="0s" ref="r3" srcID="2" duration="3345062400/720000s" role="dialogue" srcCh="1, 2"/>  
       <audio lane="-1" offset="0s" ref="r3" duration="3345062400/720000s" role="dialogue" srcCh="1, 2"/>  
      </video>  
      <spine lane="1" offset="119/25s" format="r1">  
       <clip name="Referee" offset="0s" duration="403200/90000s" start="1300/2500s" format="r2" tcFormat="NDF">  
        <adjust-volume amount="-96dB"/>  
        <video offset="0s" ref="r3" duration="418132800/90000s">  
         <audio lane="-2" offset="0s" ref="r3" srcID="2" duration="3345062400/720000s" role="dialogue" srcCh="1, 2"/>  
         <audio lane="-1" offset="0s" ref="r3" duration="3345062400/720000s" role="dialogue" srcCh="1, 2"/>  
        </video>  
       </clip>  
       <transition name="Cross Dissolve" offset="313200/90000s" duration="1s">  
        <filter-video ref="r4" name="Cross Dissolve">  
         <param name="Look" key="1" value="11 (Video)"/>  
         <param name="Amount" key="2" value="50"/>  
         <param name="Ease" key="50" value="2 (In & Out)"/>  
         <param name="Ease Amount" key="51" value="0"/>  
        </filter-video>  
        <filter-audio ref="r5" name="Audio Crossfade"/>  
       </transition>  
      </spine>  
     </clip>  
     <transition name="Cross Dissolve" offset="4s" duration="1s">  
      <filter-video ref="r4" name="Cross Dissolve">  
       <param name="Look" key="1" value="11 (Video)"/>  
       <param name="Amount" key="2" value="50"/>  
       <param name="Ease" key="50" value="2 (In & Out)"/>  
       <param name="Ease Amount" key="51" value="0"/>  
      </filter-video>  
      <filter-audio ref="r5" name="Audio Crossfade"/>  
     </transition>  
    </spine>  
 
I'm thinking that using NSXMLParser would be the best bet, so I've set one up like this:
 
NSXMLParser *new_parser = [[NSXMLParser alloc] initWithData:[NSData dataWithBytes:[[theXMLElement stringValue] UTF8String] length:[theXMLElement stringValue].length]];  
[new_parser setDelegate:self];  
BOOL parse_success = [new_parser parse];
  
 
But it fails, as -stringValue of the element returns a zero-length string (checked with an NSLog output). So how should I set up the parser for just the above element (or similar) of a larger NSXMLDocument?
I should have used -XMLString to get a valid string for the parser. I'd seen -stringValue somewhere and it had become stuck in my head.

Better CoffeeScript syntax for jQuery parameters

I have this code for making a post request, sending some data, and logging the return value
$.post '/saveletter', {start: {x: startX, y:startY}, letter: currentLetter, unitVectors: letter.unitVectorsJson(), timeVectors: letter.timeVectorsJson()}, (data) =>
console.log data
I want to split the long parameter object across several lines for better readability, but can't figure out the syntax that will work.
To make your code more readable, you can use the following (fiddle and compiled result):
$.post '/saveletter',
    start:
        x: startX
        y: startY
    letter: currentLetter
    unitVectors: letter.unitVectorsJson()
    timeVectors: letter.timeVectorsJson()
, (data) =>
  console.log data
In CoffeeScript, { and } may be omitted from an object literal, and commas may be exchanged for newlines (within an object literal, not between arguments).
The following is also valid, but might be less readable (i.e. not obvious at first glance):
start: x: startX, y: startY