"consecutive SC failures" on gem5 simple config script - cpu-architecture

I am new to gem5 and ran into a problem while trying to write a simple multi-core system configuration script. My script is based on the example scripts given at: http://learning.gem5.org/book/part1/cache_config.html
When I try to add more than one dcache to the system (one for each core), I get an endless stream of this warning message:
warn: 186707000: context 0: 10000 consecutive SC failures.
The count increases by 10000 each time.
I tried looking at gem5's provided configuration scripts se.py and CacheConfig.py, but I still can't understand what I'm missing here. I know I could simply simulate this configuration with se.py, but I am doing it myself as practice and to get a deeper understanding of the gem5 simulator.
Some additional info: I'm running gem5 in SE mode and trying to simulate a simple multi-core system using RISC-V cores.
This is my code:
import m5
from m5.objects import *
from Caches import *
#system config
system = System(cpu = [TimingSimpleCPU(cpu_id=i) for i in xrange(4)])
system.clk_domain = SrcClockDomain()
system.clk_domain.clock = '1GHz'
system.clk_domain.voltage_domain = VoltageDomain()
system.mem_mode = 'timing'
system.mem_ranges = [AddrRange('512MB')]
system.cpu_voltage_domain = VoltageDomain()
system.cpu_clk_domain = SrcClockDomain(clock = '1GHz',voltage_domain= system.cpu_voltage_domain)
system.membus = SystemXBar()
system.l2bus = L2XBar()
multiprocess =[Process(cmd = 'tests/test-progs/hello/bin/riscv/linux/hello', pid = 100 + i) for i in xrange(4)]
#cpu config
for i in xrange(4):
    system.cpu[i].icache = L1ICache()
    system.cpu[i].dcache = L1DCache()
    system.cpu[i].icache_port = system.cpu[i].icache.cpu_side
    system.cpu[i].dcache_port = system.cpu[i].dcache.cpu_side
    system.cpu[i].icache.mem_side = system.l2bus.slave
    system.cpu[i].dcache.mem_side = system.l2bus.slave
    system.cpu[i].createInterruptController()
    system.cpu[i].workload = multiprocess[i]
    system.cpu[i].createThreads()
system.l2cache = L2Cache()
system.l2cache.cpu_side = system.l2bus.master
system.l2cache.mem_side = system.membus.slave
system.system_port = system.membus.slave
system.mem_ctrl = DDR3_1600_8x8()
system.mem_ctrl.range = system.mem_ranges[0]
system.mem_ctrl.port = system.membus.master
root = Root(full_system = False , system = system)
m5.instantiate()
print ("Begining Simulation!")
exit_event = m5.simulate()
print('Exiting @ tick {} because {}'.format(m5.curTick(), exit_event.getCause()))

Related

STK MATLAB interface. Trying to access Range Rate data

I'm trying to get the Range Rate data between a satellite and a ground site, and everything works up until the last line. I've followed the online example, but I get the error below when running this MATLAB script:
stk = stkApp.Personality2;
stkScenario = stk.CurrentScenario;
if isempty(stkScenario)
    error('Please load a scenario');
end
facility = stk.GetObjectFromPath('Facility/RRFac');
satellite = stk.GetObjectFromPath('Satellite/P02S01');
access = satellite.GetAccessToObject(facility);
access.ComputeAccess;
accessDP = access.DataProviders.Item('Access Data').Exec(stkScenario.StartTime,stkScenario.StopTime);
accessStartTimes = accessDP.DataSets.GetDataSetByName('Start Time').GetValues;
accessStopTimes = accessDP.DataSets.GetDataSetByName('Stop Time').GetValues;
accessIntervals = access.ComputedAccessIntervalTimes;
accessDataProvider = access.DataProviders.Item('Access Data');
dataProviderElements = {'Start Time';'Stop Time'};
accessIntervals = access.ComputedAccessIntervalTimes;
for i = 1:1:accessIntervals.Count
    [start, stop] = accessIntervals.GetInterval(i-1);
    satelliteDP = satellite.DataProviders.Item('DeckAccess Data').Group.Item('Start Time LocalHorizontal Geometry').ExecElements(accessStartTimes{1},accessStopTimes{1},{'Time';'Range Rate'});
    satelliteAlt = satelliteDP.DataSets.GetDataSetByName('Range Rate').GetValues;
end
Error using Interface.AGI_STK_Objects_12_IAgDrDataSetCollection/GetDataSetByName Invoke Error, Dispatch Exception: The parameter is incorrect.
Error in GenRRreport (line 37)
satelliteAlt = satelliteDP.DataSets.GetDataSetByName('Range Rate').GetValues
Why does it throw this error, and how can I avoid it?

How can I use Telescope.nvim to complete a path in insert mode?

Fzf.vim implements a fuzzy finder in insert mode, extending the native ins-completion functionality.
For example, in insert mode we can map <c-x><c-f> so that Fzf.vim fuzzy-finds and inserts file names with relative paths (fzf-complete-path). The same functionality is implemented for word (fzf-complete-word) and line (fzf-complete-line) completion.
Here are the Fzf.vim example mappings for these functions:
" Insert mode completion
imap <c-x><c-k> <plug>(fzf-complete-word)
imap <c-x><c-f> <plug>(fzf-complete-path)
imap <c-x><c-l> <plug>(fzf-complete-line)
How can I set the same behavior with Telescope.nvim?
You can define a custom action and map it to a key to execute it in Telescope.
I've written a set of actions for this task (to insert file paths in the current buffer): telescope-insert-path.nvim
You just need to add your custom mappings in require'telescope'.setup():
local path_actions = require('telescope_insert_path')
require('telescope').setup {
  defaults = {
    mappings = {
      n = {
        ["pi"] = path_actions.insert_relpath_i_insert,
        ["pI"] = path_actions.insert_relpath_I_insert,
        ["pa"] = path_actions.insert_relpath_a_insert,
        ["pA"] = path_actions.insert_relpath_A_insert,
        ["po"] = path_actions.insert_relpath_o_insert,
        ["pO"] = path_actions.insert_relpath_O_insert,
        ["Pi"] = path_actions.insert_abspath_i_insert,
        ["PI"] = path_actions.insert_abspath_I_insert,
        ["Pa"] = path_actions.insert_abspath_a_insert,
        ["PA"] = path_actions.insert_abspath_A_insert,
        ["Po"] = path_actions.insert_abspath_o_insert,
        ["PO"] = path_actions.insert_abspath_O_insert,
        ["<leader>pi"] = path_actions.insert_relpath_i_visual,
        ["<leader>pI"] = path_actions.insert_relpath_I_visual,
        ["<leader>pa"] = path_actions.insert_relpath_a_visual,
        ["<leader>pA"] = path_actions.insert_relpath_A_visual,
        ["<leader>po"] = path_actions.insert_relpath_o_visual,
        ["<leader>pO"] = path_actions.insert_relpath_O_visual,
        ["<leader>Pi"] = path_actions.insert_abspath_i_visual,
        ["<leader>PI"] = path_actions.insert_abspath_I_visual,
        ["<leader>Pa"] = path_actions.insert_abspath_a_visual,
        ["<leader>PA"] = path_actions.insert_abspath_A_visual,
        ["<leader>Po"] = path_actions.insert_abspath_o_visual,
        ["<leader>PO"] = path_actions.insert_abspath_O_visual,
        -- Additionally, there are normal mode mappings for the same actions:
        -- ["<leader><leader>pi"] = path_actions.insert_relpath_i_normal,
        -- ...
      },
    },
  },
}
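If you also want an entry point from insert mode similar to fzf-complete-path, a plain insert-mode keymap that opens the file picker works. This is a minimal sketch (assuming Neovim 0.7+ and the default find_files builtin); the actual path insertion is still done through the telescope-insert-path mappings above:
-- Minimal sketch: open Telescope's file finder from insert mode.
-- The <c-x><c-f> key choice just mirrors the fzf.vim example above.
vim.keymap.set('i', '<c-x><c-f>', function()
  require('telescope.builtin').find_files()
end, { desc = 'Fuzzy-find a file path to insert' })
Once the picker is open, hit one of the mappings defined above (e.g. pi for a relative path) to insert the selection into the buffer.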

Swift unable to access other application windows

I'm trying to access the windows of running applications from my own application. Currently I am grabbing the data of all running applications, along with the kCGWindowNumber mapped to each application, but I am having trouble getting the window mapped to that kCGWindowNumber.
How can I reference the windows of other running applications from mine? I have the kCGWindowNumber of those applications but cannot figure out how to get access to their windows.
This is how I grab my data:
let options = CGWindowListOption(arrayLiteral: CGWindowListOption.excludeDesktopElements, CGWindowListOption.optionOnScreenOnly)
let windowListInfo = CGWindowListCopyWindowInfo(options, CGWindowID(0))
Then, I store it into an object, which represents my running application. E.g.
{
    kCGWindowAlpha = 1;
    kCGWindowBounds = {
        Height = 22;
        Width = 3840;
        X = 5520;
        Y = 0;
    };
    kCGWindowLayer = 24;
    kCGWindowMemoryUsage = 341416;
    kCGWindowNumber = 190;
    kCGWindowOwnerName = "Google Chrome";
    kCGWindowOwnerPID = 272;
    kCGWindowSharingState = 1;
    kCGWindowStoreType = 2;
}
Now that I have the kCGWindowNumber, I tried to access the other application's window with this:
NSApplication.shared.window(withWindowNumber: kCGWindowNumber_FOR_WINDOW_I_WANT)
but, after reading the documentation, I realized that NSApplication.shared returns the application instance for my own app, and those windows are not part of my application.
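Since NSApplication.shared only tracks the windows of your own process, the window number has to be resolved against the CGWindowList data itself. Here is a minimal sketch (the helper name is mine) that looks up the info dictionary for a given kCGWindowNumber; to actually act on that window you would go through the Accessibility API (e.g. AXUIElementCreateApplication with the kCGWindowOwnerPID), since AppKit cannot hand you another app's NSWindow:
import Cocoa

// Hypothetical helper: resolve a kCGWindowNumber back to its info dictionary.
// NSApplication.shared only knows this process's windows, so other apps'
// windows can only be inspected through the CGWindowList snapshot.
func windowInfo(for windowNumber: CGWindowID) -> [String: Any]? {
    let options: CGWindowListOption = [.excludeDesktopElements, .optionOnScreenOnly]
    guard let list = CGWindowListCopyWindowInfo(options, kCGNullWindowID) as? [[String: Any]] else {
        return nil
    }
    return list.first { ($0[kCGWindowNumber as String] as? Int) == Int(windowNumber) }
}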

Icinga2 check_load threshold on master node

I'm having an issue locating where to change the thresholds for the check_load plugin on the Icinga2 master node.
The best way is to redefine that command by adding the following to the commands.conf file in your conf.d directory, replacing <load> with whatever you want to call the command:
object CheckCommand "<load>" {
  import "plugin-check-command"
  command = [ PluginDir + "/check_load" ]
  timeout = 1m
  arguments += {
    "-c" = {
      description = "Exit with CRITICAL status if load average exceed CLOADn; the load average format is the same used by 'uptime' and 'w'"
      value = "$load_cload1$,$load_cload5$,$load_cload15$"
    }
    "-r" = {
      description = "Divide the load averages by the number of CPUs (when possible)"
      set_if = "$load_percpu$"
    }
    "-w" = {
      description = "Exit with WARNING status if load average exceeds WLOADn"
      value = "$load_wload1$,$load_wload5$,$load_wload15$"
    }
  }
  vars.load_cload1 = 10
  vars.load_cload15 = 4
  vars.load_cload5 = 6
  vars.load_percpu = false
  vars.load_wload1 = 5
  vars.load_wload15 = 3
  vars.load_wload5 = 4
}
The values you'll want to change are vars.load_cload1/5/15 and vars.load_wload1/5/15, or set them to variables that you can override in the service definition with $variablename$.
Then, in services.conf, use the new name of your check command.
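For example, something along these lines (the template name, thresholds, and host filter are placeholders to adapt):
apply Service "load" {
  import "generic-service"

  check_command = "<load>"
  vars.load_wload1 = 8
  vars.load_wload5 = 6
  vars.load_wload15 = 5
  vars.load_cload1 = 15
  vars.load_cload5 = 12
  vars.load_cload15 = 10

  assign where host.name == NodeName
}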

Prevent Quartz job recovery from starting on long-running tasks

I'm building Quartz jobs with the RequestRecovery flag set, because I have multiple server instances running and if one fails while running a job, I'd like another instance to pick up that job and restart it.
IJobDetail job = JobBuilder.Create<SystemMaintenance>()
    .WithIdentity("SystemMaintenance", "SystemMaintenance")
    .RequestRecovery(true)
    .WithDescription("System task that runs every 6 hours")
    .Build();
However, what I've noticed is that if the job is running for a very long time (say over 10 minutes), the other instances will start running the job because they assume that it has failed, even though it is running fine.
Is there any way that I can periodically ping Quartz, perhaps through JobExecutionContext, to let it know that a specific instance is still running and processing a job, to prevent others from assuming failure and starting it?
EDIT:
My configuration looks like this:
NameValueCollection properties = new NameValueCollection();
properties["quartz.scheduler.instanceName"] = "Scheduler";
properties["quartz.scheduler.instanceId"] = "AUTO";
properties["quartz.threadPool.type"] = "Quartz.Simpl.SimpleThreadPool, Quartz";
properties["quartz.threadPool.threadCount"] = "5";
properties["quartz.threadPool.threadPriority"] = "Normal";
properties["quartz.jobStore.misfireThreshold"] = "60000";
properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
properties["quartz.jobStore.useProperties"] = "true";
properties["quartz.jobStore.dataSource"] = "default";
properties["quartz.jobStore.tablePrefix"] = "QRTZ_";
properties["quartz.jobStore.clustered"] = "true";
properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
properties["quartz.dataSource.default.connectionString"] = ConnectionStringService.DatabaseConnectionString;
properties["quartz.dataSource.default.provider"] = "SqlServer-20";