Solidity contract migration: dry run vs. deploying for real

Summary: using Truffle to deploy to Rinkeby via Infura.
I just deployed my crowdsale and that seems to succeed. However, Truffle is not showing me a contract address, which is worrying. I notice Migrations dry-run (simulation) at the top, which makes me wonder if it's not actually being deployed, just tested... is this a thing? How do I get it to actually deploy?

OK, as this was hard to debug, I have an answer that may help others.
Using the latest bleeding-edge Truffle, I was informed through a warning to use the 1.0.0-web3one.0 version of truffle-hdwallet-provider.
Once I installed that, I could get past the simulation. When migrating to the rinkeby/live networks, a simulation is attempted before the actual deployment. This didn't seem to be documented anywhere, and as Truffle hung after the simulation completed, it was a real head-scratcher...
Although it seems obvious now, if there is any time-related code (such as a start time for a crowdsale), it needs to be minutes into the future at deployment time. This is not obvious when using Ganache: I had mine 20 seconds into the future, but by the time it would actually have been deployed, that moment was already in the past, causing a revert in my contracts.
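To make the pitfall concrete, here is a small sketch of checking a crowdsale opening time against chain time before deploying. This is Python with web3.py rather than Truffle's JS migrations, purely for illustration; the Infura endpoint placeholder and the 10-minute margin are my assumptions.
# A sketch, not Truffle code: sanity-check a crowdsale opening time against
# chain time before deploying. Endpoint and margin are assumptions.
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider('https://rinkeby.infura.io/<your-key>'))  # placeholder

MARGIN = 10 * 60  # give mining a 10-minute cushion, not 20 seconds

opening_time = int(time.time()) + MARGIN

# The contract's require() is evaluated against the block timestamp at the
# moment the deployment transaction is mined, not when you submit it.
chain_now = w3.eth.getBlock('latest')['timestamp']
assert opening_time > chain_now + 60, "opening time would be in the past by deploy time"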
I'm making quite good progress with the new 1.0 versions of the Ethereum tools and the Truffle beta, so shout if I can be of assistance!

Please try putting the option skipDryRun: true in the networks section:
module.exports = {
  networks: {
    ...
    ropsten: {
      provider: () => new HDWalletProvider(mnemonics, endpoint),
      network_id: 3,
      gas: 5000000,
      confirmations: 2,   // number of confirmations to wait between deployments
      timeoutBlocks: 200,
      skipDryRun: true    // skip the dry-run (simulation) before migrating for real
    },
    ...
  }
}
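(Depending on your Truffle version, you may also be able to skip the simulation for a single run from the command line, e.g. truffle migrate --network ropsten --skip-dry-run; check truffle migrate --help to confirm your version supports the flag.)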

You can use Etherlime for deploying smart contracts. Actually, you can use it for everything instead of Truffle. It is simpler and gives you more information; in my opinion it is the better tool. It is based on ethers.js, which is a lot better than web3.js. Here is a link to the documentation.


What's the difference between module-remap-source and module-virtual-source in PulseAudio?

If I run the following command, I get a "virtual microphone" that's hooked up to a sink called "MicOutput". If I send data to "MicOutput", that data is then sent to the virtual microphone.
pactl load-module module-null-sink sink_name=MicOutput sink_properties=device.description="MicOutput"
pacmd load-module module-virtual-source source_name=VirtualMic master=MicOutput.monitor
I can get similar behavior if I replace the second line with:
pactl load-module module-remap-source source_name=Remap-Source master=MicOutput.monitor
The main difference I see is that the latency is lower.
But what's the difference? When would I want to use one, or the other?
My research so far
I see these two files:
https://fossies.org/linux/pulseaudio/src/modules/module-remap-source.c (added in 2013)
https://fossies.org/linux/pulseaudio/src/modules/module-virtual-source.c (added in 2010)
Perhaps if I looked at the code hard enough I'd understand the answer. I wonder if someone happens to know the answer though?
module-virtual-source is not typically used; it's an example of how a "filter source" should be implemented.
module-remap-source has much less overhead.
Source: I asked the PulseAudio team. https://lists.freedesktop.org/archives/pulseaudio-discuss/2022-April/032260.html

Clustering in AEM

I am facing a peculiar error. I am using AEM 5.6.1.
I have 2 author instances (a1 and a2), and both are in a cluster. We perform tar optimization on the instances daily between 2 a.m. and 5 a.m. (London time zone). Now, in the error.log of a2, I see the error below every day during that window:
419 ERROR [pool-6-thread-1] org.apache.sling.discovery.impl.cluster.ClusterViewServiceImpl getEstablishedView: the existing established view does not incude the local instance yet! Assuming isolated mode.
Now, I did some research on this and came to know that AEM uses ClusterViewServiceImpl.java for clustering, and in it, the code snippet below is basically what is failing:
EstablishedClusterView clusterViewImpl = new EstablishedClusterView(
        config, view, getSlingId());
boolean foundLocal = false;
for (Iterator<InstanceDescription> it = clusterViewImpl
        .getInstances().iterator(); it.hasNext();) {
    InstanceDescription instance = it.next();
    if (instance.isLocal()) {
        foundLocal = true;
        break;
    }
}
if (foundLocal) {
    return clusterViewImpl;
} else {
    logger.info("getEstablishedView: the existing established view does not incude the local instance yet! Assuming isolated mode.");
    return getIsolatedClusterView();
}
Can someone help me understand this in more depth? Does it mean that clustering is not working properly? What are the possible impacts of this error?
I think you've got a classic case of split brain.
Clustering authors is not a good approach and has been discouraged in later versions of AEM, as the authors often get out of sync when they can't talk to each other for whatever reason, usually a temporary network issue. Believe me, they are sensitive.
When communication drops, the slave thinks it no longer has a master and claims to be the master itself. When communication is re-established, the damage has already been done, as there is no recovery mechanism.
At best, only ever allow users to connect to the primary author and have the secondary author as a High Availability server.
Better still, set up replication from the primary author that everyone writes to, and have it auto replicate on write to the secondary backup author.
Hope that helps.

How do I disable Celery's default timeout for a task, and/or prevent it from retrying?

I'm having some trouble with Celery. Unfortunately, the person who set it up doesn't work here any more, and until now we never had problems and thought we understood how it works well enough. Now it has become clear that we don't, and after hours of searching through the documentation and other posts on here, I have to admit defeat. Hopefully someone here can shed some light on what I am missing.
We're using several tasks, all of them defined in a CELERYBEAT_SCHEDULE like this:
CELERYBEAT_SCHEDULE = {
    'runs-every-5-minutes': {
        'task': 'tasks.webhook',
        'schedule': crontab(minute='*/5'),
        'args': (WEBHOOK_BASE + '/task/refillordernumberbuffer', {'refill_count': 1000})
    },
    'send-sameday-delivery-confirmation': {
        'task': 'tasks.webhook',
        'schedule': crontab(minute='*/2'),
        'args': (WEBHOOK_BASE + '/task/sendsamedaydeliveryconfirmation', {})
    },
    'send-customer-hotspot-notifications': {
        'task': 'tasks.webhook',
        'schedule': crontab(hour=9, minute=0),
        'args': (WEBHOOK_BASE + '/task/sendcustomerhotspotnotifications', {})
    },
}
That's not all of them, but they all work like this. All of them are actually PHP scripts that have no knowledge of the whole Celery concept; they are just scripts that execute certain things and send notifications if necessary. When they are done, they just spit out a JSON response that says success=true.
As far as I know, Celery is only used to execute them periodically. We don't have problems with any of them except the last one in my snippet. That task/script sends out emails, usually 5 to 10, but sometimes a lot more. And that's where the problems start, because (as far as I could tell by watching celery events; I honestly could not find any confirmation of this in the docs) when the successful JSON response from the PHP script doesn't arrive within 3 minutes, Celery retries the task, and the script sends a lot of the emails again. And again, because only a small number of emails had been recorded as "done" from the task's initial run. This often leads to 4 or 5 retries, until enough emails have been marked as "successfully sent" by the earlier runs that the final retry finishes under this mystical 3-minute limit.
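For context, here is roughly what such a task presumably looks like. This is a hypothetical reconstruction (the real work happens in the PHP script behind the URL), matching the (url, params) args used in the schedule above:
# tasks.py -- hypothetical reconstruction of the beat-driven webhook task;
# the real implementation just calls a PHP endpoint.
import requests
from celery import Celery

app = Celery('tasks')
app.config_from_object('celeryconfig')

@app.task
def webhook(url, params):
    # POST to the PHP endpoint and hand back its JSON, e.g. {"success": true}.
    response = requests.post(url, data=params)
    response.raise_for_status()
    return response.json()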
My questions:
Is there a default time limit? Where is it set? How do I override it? I've read about time_limit and soft_time_limit, but nothing I tried in the config seemed to help. If this is the solution, I'd need some assistance as to how the settings are properly applied.
Can't I "just" disable the whole retry concept (for one task or for all, it doesn't really matter)? It seems to me that we don't need it: we run our tasks periodically, and missing one run due to a temporary error wouldn't matter. I guess that means we shouldn't have used Celery in the first place and we're misusing it, but for now I'd just like to understand it better.
Thanks for any help, and sorry if I left anything unclear – happy to answer any follow-up questions and provide more details if necessary.
The rest of the config file goes like this:
## Broker settings.
databases = parse_databases_xml()
settings = parse_custom_settings_xml()
BROKER_URL = 'redis://' + databases['taskqueue']['host'] + '/' + databases['taskqueue']['dbname']
# List of modules to import when celery starts.
CELERY_IMPORTS = ("tasks", )
## Using the database to store task state and results.
CELERY_RESULT_BACKEND = BROKER_URL
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ANNOTATIONS = {
    "*": {"rate_limit": "100/m"},
    "ping": {"rate_limit": "100/m"},
}
There is no time_limit to be found anywhere, so I don't think we're setting one ourselves. I left out the Python imports and the functions that read from our config XML files, as that stuff all works fine and just concerns some database auth data.
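For what it's worth, here is a sketch of where the relevant settings would live in this (old-style, uppercase) config. These are real setting names, but whether they cure the 3-minute re-runs depends on what actually triggers them; with a Redis broker, one plausible culprit is the broker redelivering unacknowledged tasks after its visibility timeout:
# celeryconfig.py (additions) -- a sketch, not a verified fix.

# Celery imposes no execution time limit by default; these are how you
# would add hard/soft limits (in seconds) for all tasks.
CELERYD_TASK_TIME_LIMIT = 600
CELERYD_TASK_SOFT_TIME_LIMIT = 540

# Celery does not retry a task on its own unless the task calls
# self.retry() or the broker redelivers an unacknowledged message.
# With the Redis broker, redelivery is governed by visibility_timeout,
# so keep it well above the longest expected task runtime.
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}  # seconds

# With early acknowledgement (the default), a message is acked before the
# task runs, so it will not be redelivered mid-execution.
CELERY_ACKS_LATE = False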

EF6/Code First: Super slow during the 1st query, but only in Debug

I'm using EF6 RC1 with the Code First strategy, without precompiled views, and the problem is:
If I compile and run the exe application, it takes about 15 seconds to run the first query (that's okay, since I'm still working on the pre-generated views). But if I use Visual Studio 2013 Preview to debug the exact same application, it takes almost 2 minutes BEFORE running the first query:
Dim Context = New MyEntities()
Dim Query = From I in Context.Itens '' <--- The debug takes 2 minutes in here
Dim Item = Query.FirstOrDefault()
Is there a way to remove this extra time? Am I doing something wrong here?
P.S.: The context itself is not complicated; it's just big, with 200+ tables.
Edit: Found out that the problem is that during debugging, EF appears to be generating the views, ignoring the pre-generated ones.
Using the source code from EF I discovered that the property:
IQueryProvider IQueryable.Provider
{
    get
    {
        return _provider ?? (_provider = new DbQueryProvider(
            GetInternalQueryWithCheck("IQueryable.Provider").InternalContext,
            GetInternalQueryWithCheck("IQueryable.Provider").ObjectQueryProvider));
    }
}
is where the time is being consumed. But this is strange, since it only takes this long in debug. Am I missing something here?
Edit: Found more info related to the question:
Using Process Monitor (by Sysinternals), I found out that it's the 'devenv.exe' process that is consuming tons of time, more specifically with a 'Thread Exit'. It repeats the Thread Exit stack 36 times. I don't know if this info is very useful, but I saved a '.csv' with the stack; here is its body: [...] (edit: removed the '.csv' body, I can post it again in the comments if someone really thinks it's going to be useful, but it was confusing and too big.)
Edit: Installed VS2013 Ultimate and Entity Framework 6 RTM. Installed the Entity Framework Power Tools Beta 4 and used it to generate the views. Nothing changed... If I run the exe it takes 20 seconds; if I 'Start' debugging it takes 120 seconds.
Edit: Created a small project to simulate the error: http://sdrv.ms/16pH9Vm
Just run the project inside the environment and directly through the .exe, click the button and compare the loading time.
This is a known performance issue in Lazy (which EF is using) when the debugger is attached. We are currently working on a fix (the current approach we are looking at is removing the use of Lazy). We hope to ship this fix in a patch release soon. You can track progress of this issue on our CodePlex site - http://entityframework.codeplex.com/workitem/1778.
More details on the coming 6.0.2 patch release that will include a fix are here - http://blogs.msdn.com/b/adonet/archive/2013/10/31/ef6-performance-issues.aspx
I don't know if you have found the solution, but in my case I had a similar issue which wasted close to a week of trying different suggestions. Finally, I solved it by setting optimizeCompilations="true" on the <compilation> element in my web.config, and performance improved dramatically, from 15-30 seconds to about 2 seconds.

The SVNPoller is not triggered (warning in twistd.log)

I am not sure what is going on, but I get this weird issue with Buildbot.
The SVNPoller is configured as it should be (I checked various example config files), and when I run buildbot checkconfig it says that everything is fine... but it won't work at all.
If I trigger a build via the scheduler class it works fine: I can retrieve the source updates and build without problems (tried with a 1-hour timeframe).
The problem, though, is that the poller is not working, so even though I build each hour, the changes column stays empty. (I do get the changes for the various versions: if I click on a build's details I can see the sourcestamp carrying the right, most recent revision every time I modify the codebase.) So if a build fails, I have no way of knowing who made the last change.
Another peculiar thing is that in the twistd.log i see this line:
Warning: no ChangeSources specified in c['change_source']
And I am not sure why it wouldn't work, since checkconfig does not raise any error.
The result of this is, of course, that the only thing built is the hourly one, leaving me without the poller and without knowing who put code into each build.
This is the code for the poller:
c['change source'] = SVNPoller(
    svnurl="svn+ssh://user@svnserver.domain.com/svn/project/trunk",
    pollinterval=60*5,
    histmax=10,
    project='myproj',
    svnbin='/usr/bin/svn')
So far it looks good to me, so I am not really sure what is wrong here... why is the SVNPoller not triggering any builds?
Does anyone have suggestions about why this is happening? Is there any other way to get changes from an SVN server? I am a total newbie at Buildbot, and I am not really getting much out of the manual, which reads much more like a textbook than a manual that shows you how to do stuff :)
Thanks!!!!!
OK, silly me :) The problem is the missing underscore in change_source... once I added it, the problem was solved:
c['change_source'] = SVNPoller(svnurl=source_svn_url,
                               pollinterval=60,
                               histmax=10,
                               project='The_project',
                               svnbin='/usr/bin/svn')
This will poll the SVN codebase at source_svn_url (just put your svn:// path there), check every minute to see whether anyone has made changes, and keep 10 changes in the record list (any change after the 10th will not show up, so use it carefully if you do a lot of commits).
Hope this helps whoever uses Buildbot!
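To close the loop from changes to builds: the poller only records changes, and a scheduler is what turns them into builds. A minimal sketch of the wiring, assuming a buildbot 0.8.x-style master.cfg (the builder name my-builder is a placeholder):
# master.cfg (excerpt) -- a sketch; 'my-builder' is a placeholder.
from buildbot.changes.svnpoller import SVNPoller
from buildbot.changes.filter import ChangeFilter
from buildbot.schedulers.basic import SingleBranchScheduler

c['change_source'] = [SVNPoller(svnurl=source_svn_url,
                                pollinterval=60,
                                histmax=10,
                                project='The_project',
                                svnbin='/usr/bin/svn')]

# Without a scheduler listening for the poller's changes, nothing builds.
c['schedulers'] = [SingleBranchScheduler(name='on-svn-commit',
                                         change_filter=ChangeFilter(project='The_project'),
                                         treeStableTimer=60,  # let commit bursts settle
                                         builderNames=['my-builder'])]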