I am trying to store a list of objects in the database through a foreach loop. Here is my current code:
foreach (var i in ruleset)
{
    var currentRule = Rule;
    currentRule.OriginIP = i[0];
    currentRule.DestinationIP = i[1];
    currentRule.Protocol = (Protocol)Enum.Parse(typeof(Models.Protocol), i[2]);
    currentRule.Ports = i[3];
    _context.Rules.Add(currentRule);
    Console.WriteLine(_context.ChangeTracker.DebugView.LongView);
    Console.WriteLine(currentRule.RuleID);
}
_context.SaveChanges();
For some reason this only stores the last object in the list. I have SaveChanges() outside of the loop, as I assumed that would improve performance.
When I run this I get the following:
rule {RuleID: -2147482647} Added
RuleID: -2147482647 PK Temporary
CreationDate: '26/01/2021 14:16:10'
DestinationIP: '10.232.20.20'
Enabled: 'False'
OriginIP: '192.168.10.10'
Ports: '80, 443'
Protocol: 'TCP'
0
rule {RuleID: -2147482647} Added
RuleID: -2147482647 PK Temporary
CreationDate: '26/01/2021 14:16:10'
DestinationIP: '10.232.20.21' Originally '10.232.20.20'
Enabled: 'False'
OriginIP: '192.168.10.11' Originally '192.168.10.10'
Ports: '80, 444' Originally '80, 443'
Protocol: 'TCP'
Seeing the ChangeTracker show a change for each entry, I tried to put SaveChanges() inside the loop, but then the first entry is stored and the second errors out since it attempts to use the same ID as the entry it has just saved:
rule {RuleID: -2147482647} Added
RuleID: -2147482647 PK Temporary
CreationDate: '26/01/2021 14:25:40'
DestinationIP: '10.232.20.20'
Enabled: 'False'
OriginIP: '192.168.10.10'
Ports: '80, 443'
Protocol: 'TCP'
62
rule {RuleID: 62} Added
RuleID: 62 PK
CreationDate: '26/01/2021 14:25:40'
DestinationIP: '10.232.20.21' Originally '10.232.20.20'
Enabled: 'False'
OriginIP: '192.168.10.11' Originally '192.168.10.10'
Ports: '80, 444' Originally '80, 443'
Protocol: 'TCP'
I know I must be doing something wrong but I can't find what!
var currentRule = Rule;
_context.Rules.Add(currentRule);
You keep adding this same Rule object over and over.
When you add something to EF, it keeps track of that object; this is how EF knows when an entity has been updated. EF cannot track the same in-memory object several times and pretend the copies are different entities.
The first time, your entity gets added.
The second time, EF realizes that this is the same object as before, and therefore does not add anything new - it was already tracking this object anyway.
Make sure you add new objects, e.g.:
var currentRule = new Rule();
// set some values
_context.Rules.Add(currentRule);
You're adding the same Rule over and over. Try something like
foreach (var i in ruleset)
{
    var currentRule = new Rule();
    currentRule.OriginIP = i[0];
    currentRule.DestinationIP = i[1];
    currentRule.Protocol = (Protocol)Enum.Parse(typeof(Models.Protocol), i[2]);
    currentRule.Ports = i[3];
    _context.Rules.Add(currentRule);
}
_context.SaveChanges();
Via a logic hook, I'm trying to update fields of my products after an invoice has been saved.
What I understand so far is, that I need to get the invoice related AOS_Products_Quotes and from there I could get the products, update the required fields and save the products. Does that sound about right?
The logic hook is being triggered but relationships won't load.
function decrement_stocks($bean, $event, $arguments) {
    //$bean->product_value_c = $bean->$product_unit_price * $bean->product_qty;
    $file = 'custom/modules/AOS_Invoices/decrement.txt';
    // Get the Invoice ID:
    $sInvoiceID = $bean->id;
    $oInvoice = new AOS_Invoices();
    $oInvoice->retrieve($sInvoiceID);
    $oInvoice->load_relationship('aos_invoices_aos_product_quotes');
    $aProductQuotes = $oInvoice->aos_invoices_aos_product_quotes->getBeans();
    /*
    $aLineItemslist = array();
    foreach ($oInvoice->aos_invoices_aos_product_quotes->getBeans() as $lineitem) {
        $aLineItemslist[$lineitem->id] = $lineitem;
    }
    */
    $sBean = var_export($bean, true);
    $sInvoice = var_export($oInvoice, true);
    $sProductQuotes = var_export($aProductQuotes, true);
    $current = $sProductQuotes . "\n\n\n------\n\n\n" . $sInvoice . "\n\n\n------\n\n\n" . $sBean;
    file_put_contents($file, $current);
}
The invoice is being retrieved just fine, but load_relationship doesn't seem to do anything ($sInvoice doesn't change with or without it) and $aProductQuotes is null.
I'm working on SuiteCRM 7.8.3 and tried it on 7.9.1 as well without success. What am I doing wrong?
I'm not familiar with SuiteCRM specifics, but I'd always suggest checking the following (there's a rough sketch at the end of this answer):
Return value of retrieve(): bean or null?
If null, then no bean with the given ID was found.
In that case $oInvoice would stay empty (your comment suggests that's not the case here, though).
Return value of load_relationship(): true (success) or false (failure, check the logs).
And I do wonder: why don't you use $bean?
Instead you seem to retrieve another copy/reference of $bean (and call it $oInvoice). Why?
Or did you mean to retrieve a different type of bean that is somehow connected to $bean?
Then it surely doesn't have the same id as $bean, unless you specifically coded it that way.
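To make that concrete, here is a rough, untested sketch of how those checks could look inside the hook itself, working on $bean directly instead of re-retrieving it. $GLOBALS['log'] is assumed to be the standard SugarCRM/SuiteCRM logger, and the loop body is only a placeholder for your product updates:

function decrement_stocks($bean, $event, $arguments) {
    // $bean is already the saved invoice; no need to retrieve it again.
    if (!$bean->load_relationship('aos_invoices_aos_product_quotes')) {
        // false means the relationship could not be loaded - check the link name
        $GLOBALS['log']->fatal('decrement_stocks: could not load aos_invoices_aos_product_quotes');
        return;
    }
    $aProductQuotes = $bean->aos_invoices_aos_product_quotes->getBeans();
    if (empty($aProductQuotes)) {
        $GLOBALS['log']->fatal('decrement_stocks: no related line items for invoice ' . $bean->id);
        return;
    }
    foreach ($aProductQuotes as $lineItem) {
        // placeholder: load the related product, update its stock fields, save it
    }
}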
I have a Meteor app that takes the text of an article and splits it into paragraphs, sentences, words, and characters, and then stores that in a JSON object which I then save as a document in a collection. The document I am testing now ends up as 15133 bytes in MongoDB.
When I insert the document it takes about 20 or 30 seconds to insert. Then sometimes it starts going through my article creation routine again and inserts another document. Sometimes it ends up inserting 3 or more documents. Sometimes it behaves as it should and only inserts 1 document into the collection.
What should I be looking for that could be causing this behavior?
Here is my code, as requested:
Meteor.methods({
  'createArticle': function (text, title) {
    var article = {}
    article.title = title
    article.userID = "sdfgsdfg"
    article.text = text
    article.paragraphs = []
    var paragraphs = splitArticleIntoParagraphs(text)
    console.log("paragraphs", paragraphs)
    _.each(paragraphs, function (paragraph, p) {
      if (paragraph !== "") {
        console.log("paragraph", paragraph)
        article.paragraphs[p] = {}
        article.paragraphs[p].read = false
        article.paragraphs[p].text = paragraph
        console.log("paragraphs[p]", article.paragraphs[p])
        var sentences = splitParagraphIntoSentences(paragraph)
        article.paragraphs[p].sentences = []
      }
      _.each(sentences, function (sentence, s) {
        if (sentence !== "") {
          article.paragraphs[p].sentences[s] = {}
          console.log("sentence", sentence)
          article.paragraphs[p].sentences[s].text = sentence
          article.paragraphs[p].sentences[s].read = false
          console.log("paragraphs[p].sentences[s]", article.paragraphs[p].sentences[s])
          var wordsForward = splitSentenceIntoWordsForward(sentence)
          console.log("wordsForward", JSON.stringify(wordsForward))
          article.paragraphs[p].sentences[s].forward = {}
          article.paragraphs[p].sentences[s].forward.words = wordsForward
          // var wordsReverse = splitSentenceIntoWordsReverse(sentence)
          _.each(wordsForward, function (word, w) {
            if (word) {
              // console.log("word", JSON.stringify(word))
              // article.paragraphs[p].sentences[s] = {}
              // article.paragraphs[p].sentences[s].forward = {}
              // article.paragraphs[p].sentences[s].forward.words = []
              article.paragraphs[p].sentences[s].forward.words[w] = {}
              article.paragraphs[p].sentences[s].forward.words[w].wordID = word._id
              article.paragraphs[p].sentences[s].forward.words[w].simp = word.simp
              article.paragraphs[p].sentences[s].forward.words[w].trad = word.trad
              console.log("word.simp", word.simp)
              var characters = word.simp.split('')
              console.log("characters", characters)
              article.paragraphs[p].sentences[s].forward.words[w].characters = []
              _.each(characters, function (character, c) {
                if (character) {
                  console.log("character", character, p, s, w, c)
                  article.paragraphs[p].sentences[s].forward.words[w].characters[c] = {}
                  article.paragraphs[p].sentences[s].forward.words[w].characters[c].text = character
                  article.paragraphs[p].sentences[s].forward.words[w].characters[c].wordID = Words.findOne({simp: character})._id
                }
              })
            }
          })
        }
      })
    })
    // console.log("article", JSON.stringify(article))
    // console.log(JSON.stringify(article.paragraphs[10].sentences[1].forward))//.words[4].characters[0])
    console.log("done")
    var id = Articles.insert(article)
    console.log("id", id)
    return id
  }
})
I call the method here:
Template.articleList.events({
  "click #addArticle": function(event) {
    event.preventDefault();
    var title = $('#title').val();
    var text = $('#text').val();
    $('#title').value = '';
    $('#text').value = '';
    $('#text').attr('rows', '3');
    Meteor.call('createArticle', text, title);
  }
})
An important thing to keep in mind is that Meteor methods do not work very well when it comes to performing CPU-intensive tasks. Since your Meteor server runs in a single thread, any blocking computation - like yours - will affect all client connections, e.g. by delaying DDP heartbeats. This, in turn, can result in clients thinking that the connection was dropped.
As @ghybs suggested in one of the comments, your method is probably triggered several times by an impatient DDP client that thinks the server has disconnected. The easiest way to prevent this behavior is by adding the noRetry flag to Meteor.apply, as explained here:
https://docs.meteor.com/api/methods.html#Meteor-apply
I believe Meteor.call does not have this option.
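For example, a minimal sketch of the call in your click handler, using the same text and title variables you already read from the form; the callback is just there to log the outcome:

Meteor.apply('createArticle', [text, title], { noRetry: true }, function (error, result) {
  // With noRetry the method is not re-sent after a reconnect; instead the
  // callback fires with an error if the connection was lost mid-call.
  if (error) {
    console.log("createArticle failed", error);
  } else {
    console.log("new article id", result);
  }
});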
Another strategy would be trying to make sure that your methods are idempotent, i.e. calling them more than once should not produce any additional effects. This is usually true - at least when you use method simulation - because retrying the db insert will re-use the same document id, which will fail on the second try. For some reason this is not happening in your case.
Finally, the problem you described clearly shows that a different pattern should probably be used for a computationally expensive task like yours. If I were you, I would start by splitting the job into several steps:
First I would make sure that the document is uploaded to the server with a POST request rather than through DDP.
Then I would implement a "process file" server-side method that grabs the file which is already on the server or in the database (in case you used a files collection). The first thing the method should do is call this.unblock() (see the sketch below), but that's not all.
Ideally the computational task should be executed in a separate process. Only when that process completes would the method return, telling the actual caller that the job is done. But since we called this.unblock(), that caller can perform other tasks, e.g. calling other methods/subscriptions, while waiting for the result.
Sometimes, having a separate process will not be good enough. I've experienced situations where I had to delegate the task to one or more dedicated worker servers.
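A minimal sketch of such a "process file" method follows. The method name processArticle, the processed flag, and the placeholder comment are all hypothetical; the important part is the this.unblock() call at the top:

Meteor.methods({
  'processArticle': function (articleId) {
    // Let this client connection run other methods/subscriptions while we work.
    this.unblock();
    var article = Articles.findOne(articleId);
    if (!article) {
      throw new Meteor.Error('not-found', 'No article with id ' + articleId);
    }
    // placeholder: do the expensive splitting here, ideally delegated to a
    // separate process/worker, and only then update the document
    Articles.update(articleId, { $set: { processed: true } });
    return articleId;
  }
})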
I'm trying to switch my Grails application from H2 to PostgreSQL.
Steps I've done to reach my goal:
Download JDBC from http://jdbc.postgresql.org/download.html (JDBC4 Postgresql Driver, Version 9.3-1100)
Add the JDBC driver to the /lib folder
Change DataSource. Now it looks like:
dataSource {
    pooled = true
    driverClassName = "org.postgresql.Driver"
    dialect = "org.hibernate.dialect.PostgreSQLDialect"
    username = "postgres"
    password = "admin"
}
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = false
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
}
// environment specific settings
environments {
    development {
        dataSource {
            dbCreate = "update" // one of 'create', 'create-drop', 'update', 'validate', ''
            //url = "jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000"
            url = "jdbc:postgresql://localhost:5432/admin_panel"
        }
    }
    test {
        dataSource {
            dbCreate = "update"
            url = "jdbc:postgresql://localhost:5432/admin_panel"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:postgresql://localhost:5432/admin_panel"
            pooled = true
            properties {
                maxActive = -1
                minEvictableIdleTimeMillis = 1800000
                timeBetweenEvictionRunsMillis = 1800000
                numTestsPerEvictionRun = 3
                testOnBorrow = true
                testWhileIdle = true
                testOnReturn = true
                validationQuery = "SELECT 1"
            }
        }
    }
}
And now the game starts. I type 'run-app' in GGTS and I get an error. Objects I'm trying to create using BootStrap cannot be initialized because of a validation error: Error initializing the application: Validation Error(s) occurred during save().
It is really strange, because the message says that the reference to a previously created object is null: Field error in object 'adminpanel.component.Text' on field 'subpage': rejected value [null];.
There should be no possibility that "subpage" is null in this line, so I go to pgAdmin III to check if the record was created, and there I notice that no tables have been created at all.
Everything works if the application is connected to H2, but it starts to freak out when I switch it to Postgres. Additionally, when I remove everything from BootStrap, the application starts and I can create objects normally, but I still cannot see them in pgAdmin. Do you have any advice on what else I can check, or why GORM does not create tables in my app when I use PostgreSQL?
Thanks in advance.
EDIT:
I found the source of the problem after a few more tests...
PostgreSQL gives a strange value for the 'id' column in every table. When I was using H2, I had values from 1..x in every table; in PostgreSQL I have something like this:
table1 id: 1, 2, 3, ... 7, 8, 9
table2 id: 4, 5, 6, ... 10, 11
As you probably noticed, the id values are handed out across both tables, so I cannot have e.g. a table1 object with id 1 and a table2 object with id 1. Do you have any idea why?
Grails/Hibernate uses a sequence for object IDs on databases like Postgres (or Oracle, etc). By default, Grails uses a shared sequence (hibernate_sequence), so every object gets a unique id, but unique across the whole database, not per table.
You can configure a domain class to use its own sequence, like:
static mapping = {
    id generator: 'sequence', params: [sequence: 'my_own_sequence']
}
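If you use a custom sequence like this, it has to exist in the database. Depending on your dbCreate setting, Hibernate's schema update may create it for you; if not, creating it by hand in Postgres is a one-liner:

CREATE SEQUENCE my_own_sequence START WITH 1 INCREMENT BY 1;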
See also
http://www.postgresql.org/docs/9.1/static/sql-createsequence.html
http://grails.org/doc/2.3.4/ref/Database%20Mapping/id.html
Does anyone know how to force TCP when using Resolv::DNS?
It seems that when I ask for ANY records, the output is truncated and I get partial results. When I perform many queries (one for each record type) I get more results. I also get inconsistent results (they vary between machines; two sequential queries return different results; ...).
I thought it could have something to do with UDP being limited by packet size.
Any idea how I can force it to use TCP? Any other DNS package that I can use?
I had this same problem, wanting to use Resolv for TCP-only queries as I was expecting result sets that were quite large. I ended up digging through Resolv's source code and learned that, by default, TCP queries are only ever performed if the UDP query fails. I found that I could subclass Resolv::DNS and override the each_resource method. Here's my source:
require 'resolv'

# A TCP-only resolver built from `Resolv::DNS`. See the docs for what it's about.
# http://ruby-doc.org/stdlib-1.9.3/libdoc/resolv/rdoc/Resolv/DNS.html
class TcpDNS < Resolv::DNS
  # Override each_resource to use a TCP requester instead of a UDP requester. This
  # is mostly borrowed from `lib/resolv.rb` with the UDP->TCP fallback logic removed.
  def each_resource(name, typeclass, &proc)
    lazy_initialize
    senders = {}
    requester = nil
    begin
      @config.resolv(name) { |candidate, tout, nameserver, port|
        requester = make_tcp_requester(nameserver, port)
        msg = Message.new
        msg.rd = 1
        msg.add_question(candidate, typeclass)
        unless sender = senders[[candidate, nameserver, port]]
          sender = senders[[candidate, nameserver, port]] =
            requester.sender(msg, candidate, nameserver, port)
        end
        begin # HACK
          reply, reply_name = requester.request(sender, tout)
        rescue
          return
        end
        case reply.rcode
        when RCode::NoError
          extract_resources(reply, reply_name, typeclass, &proc)
          return
        when RCode::NXDomain
          raise Config::NXDomain.new(reply_name.to_s)
        else
          raise Config::OtherResolvError.new(reply_name.to_s)
        end
      }
    ensure
      requester.close
    end
  end
end
Then using it is as easy as follows:
TcpDNS.open :nameserver => ns_addrs, :search => '', :ndots => 1 do |dns|
  resp = dns.getresources target, Resolv::DNS::Resource::IN::ANY
end
I'm trying to test detaching an entity from one context, making modifications to it, creating a new context, attaching it, and having the changes made between sessions persist. I don't seem to be able to get this working appropriately. I've tried calling DetectChanges as well as ApplyCurrentValues with no success. Below is what I've got so far. These aren't POCOs and I don't want to treat them as such. I just want to be able to detach an entity, make changes to it, and re-attach it. Thanks!
OCConsumer consumer;
using (var ctx1 = new CMSStagingContext())
{
    consumer = (from c in ctx1.OCConsumers
                select c).FirstOrDefault();
    Console.WriteLine("Retrieved {0} - {1} {2}",
        consumer.CustomerId, consumer.FirstName, consumer.LastName);
    ctx1.Detach(consumer);
}
consumer.BirthDate = "10/22/1981";
using (var ctx2 = new CMSStagingContext())
{
    ctx2.Attach(consumer);
    ctx2.ApplyCurrentValues("OCConsumers", consumer);
    ctx2.SaveChanges(System.Data.Objects.SaveOptions.DetectChangesBeforeSave | System.Data.Objects.SaveOptions.AcceptAllChangesAfterSave);
}
When you attach an object to a context, the context is going to presume that the object is unmodified, unless you tell it otherwise. The simplest way to do this is to attach the object to the context first, then modify it. So you could change your code to:
OCConsumer consumer;
using (var ctx1 = new CMSStagingContext())
{
    consumer = (from c in ctx1.OCConsumers
                select c).FirstOrDefault();
    Console.WriteLine("Retrieved {0} - {1} {2}",
        consumer.CustomerId, consumer.FirstName, consumer.LastName);
    ctx1.Detach(consumer);
}
using (var ctx2 = new CMSStagingContext())
{
    ctx2.Attach(consumer);
    consumer.BirthDate = "10/22/1981";
    ctx2.SaveChanges(System.Data.Objects.SaveOptions.DetectChangesBeforeSave | System.Data.Objects.SaveOptions.AcceptAllChangesAfterSave);
}
Another approach would be to use Context.ObjectStateManager.ChangeObjectState.
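A rough sketch of that approach, reusing the context and entity from above and assuming the EF 4 ObjectContext API (where EntityState lives in System.Data):

using (var ctx2 = new CMSStagingContext())
{
    // Attach brings the entity in as Unchanged; ChangeObjectState then marks the
    // whole entity as Modified so its current values are written on SaveChanges.
    ctx2.Attach(consumer);
    consumer.BirthDate = "10/22/1981";
    ctx2.ObjectStateManager.ChangeObjectState(consumer, System.Data.EntityState.Modified);
    ctx2.SaveChanges();
}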