Issues deploying Solidity smart contract to Rinkeby test network - deployment

I'm using OpenZeppelin to make a crowdsale contract. All my tests (30 of them) pass with flying colours ;) and I can migrate on a local Ganache blockchain no problem.
When I try to deploy to Rinkeby I start having issues. My config in truffle.js is:
rinkeby: {
  provider: rinkeybyProvider,
  network_id: 3,
  gas: 4712388,
  gasPrice: web3.utils.toWei("40", "gwei"),
  websockets: true,
  from: "0x9793371e69ed67284a1xxxx"
}
When I deploy on Rinkeby I get:
"SplitWallet" hit a require or revert statement somewhere in its
constructor. Try: * Verifying that your constructor params satisfy
all require conditions. * Adding reason strings to your require
statements.
I have gone through and put reason strings in every require/revert in the constructor hierarchy, but I never see any of the messages. I thought it might be that my payees and shares arrays were different lengths, but no, they are the same length (the only parameters that the constructor of a SplitWallet takes).
Things to note:
I have an Infura API key
I am using the truffle-wallet-provider, with just a private key (no mnemonic), to deploy
I am confused (due to the above) about how my deploy script can know multiple (10) wallets on deployment. Usually (in Ganache) these are the 10 wallets Ganache generates for you, but here I am providing a private key, so it shouldn't be able to know 10 wallets, just one - the public key of the private key that is deploying the contract, no? (I'm talking about here):
module.exports = async (
  deployer,
  network,
  [owner, purchaser, investor, organisation, ...accounts] // how does it know these??
)
This last point makes me wonder, because I printed out owner/purchaser and they don't match my public key wallet at all, so I have no idea where they are coming from. And if they don't match, and it defaults to the owner being accounts[0], then that wallet may not be able to pay for the gas... perhaps?
Thanks

Rinkeby network id is 4, not 3.
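With that fixed, the rinkeby block would look something like this (everything else left exactly as you have it):
rinkeby: {
  provider: rinkeybyProvider,
  network_id: 4, // Rinkeby; 3 is Ropsten
  gas: 4712388,
  gasPrice: web3.utils.toWei("40", "gwei"),
  websockets: true,
  from: "0x9793371e69ed67284a1xxxx"
}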

How to send signed manifest to the PTE to interact with a published package

I managed to publish a package to the PTE via resim publish.
Now I am stuck as I have the following problem:
How to send a signed manifest to the PTE (from my account as I need a badge that will be returned)?
In order to create a manifest and sign it, you need to wire up an element which, upon activation, constructs a manifest, sends it to the PTE extension to be signed, and receives the results.
Here is some sample code; this is the TypeScript part:
document.getElementById('instantiateMainComponent')!.onclick = async function () {
  // Construct manifest
  const manifest = new ManifestBuilder()
    // Instantiates component
    .callFunction(Package_Address, 'ComponentName', 'instantiate', [])
    // Deposits returned resources to account
    .callMethodWithAllResources(Account_Address, 'deposit_batch')
    .build()
    .toString();
  // Send manifest to extension for signing
  const receipt = await signTransaction(manifest);
  // Add results here
}
And here is my associated HTML:
<h2>3. Instantiate Main Component</h2>
<p><button id="instantiateMainComponent">Instantiate</button></p>
This was one of the things I needed to do not too long ago. To clarify: in the public test environment (only the PTE, nothing else), not all transactions require signatures. As a matter of fact, only transactions which withdraw funds from an account require a signature; nothing else requires one. This means the following:
For all transactions which do not withdraw funds from an account, you can continue using the PTE API to send your transactions and not sign them.
If signing transactions is necessary, then my recommendation is to construct and sign your transactions using Rust since it already has an SBOR implementation and you can very easily sign all of your transactions with Rust + the existing Scrypto libraries.
Regarding the Rust implementation, here is some example code I wrote in Rust which signs transactions and submits them to the PTE (PTE01, but you can change it to PTE02): https://github.com/0xOmarA/PTE-programmatic-interactions/blob/main/src/main.rs
Here is an example of this code being used in action: https://github.com/0xOmarA/RaDEX/tree/main/bootstrap

Issue with getOrCreateAssociatedAccountInfo on Quicknode

I just switched to QuickNode (testnet) as the public Solana node has IP limits. I notice that when I call token.getOrCreateAssociatedAccountInfo I encounter an issue which never happened on the main public node:
{"name":"Error","message":"Failed to find account","stack":"Error: Failed to find account\n at Token.getAccountInfo (/var/www/node_modules/#solana/spl-token/lib/index.cjs.js:493:13)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async Token.getOrCreateAssociatedAccountInfo (/var/www/node_modules/#solana/spl-token/lib/index.cjs.js:338:16)\n at async SolanaBlockchainAPI.reward (/var/www/src/datasources/solanaBlockchain.js:266:35)
Is there some sort of compatibility issue?
My code:
const token = new Token(
  connection,
  new web3.PublicKey(token_type.token_address),
  TOKEN_PROGRAM_ID,
  this.appTreasPair
);
const recipientTokenAddress = await token.getOrCreateAssociatedAccountInfo(
  new web3.PublicKey(solana_public_address)
);
From the error it looks like there's some trouble locating the account that you're plugging into the getOrCreateAssociatedAccountInfo call.
There's not a lot of info to work with here, but my initial guess is that you were working on the public Solana devnet and plugged in a testnet QuickNode URL, which would explain why the account isn't available for you.
The solution here would be to make sure you're on devnet instead of testnet when creating your QuickNode endpoint. Testnet really isn't used very much outside of people testing infrastructure. You're usually either working on production stuff (mainnet) or testing out functionality on the developer testing net (devnet).
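If you're on the old @solana/spl-token API shown above, the only thing that needs to change is the endpoint your Connection points at. A minimal sketch (the QuickNode URL below is a made-up placeholder; use the devnet URL from your QuickNode dashboard):
const web3 = require("@solana/web3.js");

// Point the connection at a devnet endpoint rather than a testnet one
const connection = new web3.Connection(
  "https://example-name.solana-devnet.quiknode.pro/<your-token>/", // hypothetical URL
  "confirmed"
);

// The rest of the code (new Token(...), getOrCreateAssociatedAccountInfo) stays the same.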

How do I change the API version of Simple Salesforce

I went to \simple_salesforce and changed a line in api.py by hand from
DEFAULT_API_VERSION = '42.0'
to
DEFAULT_API_VERSION = '51.0'
But it feels incorrect to do it like this. Is there some other way?
There's a bit of text in the readme under "additional features":
SalesforceLogin, which takes in a username, password, security token,
optional version and optional domain
(...)
SFType class, which is
used internally by the getattr() method in the Salesforce() class
and represents a specific SObject type. SFType requires object_name
(i.e. Contact), session_id (an authentication ID), sf_instance
(hostname of your Salesforce instance), and an optional sf_version
So it looks like you can pass sf_version to the SalesforceLogin() call and it'll be respected, or version to Salesforce(). Check the files and experiment. Maybe even make a pull request in simple_salesforce's Git repo so they update the default; 42.0 was released over 3 years ago. It's perfectly fine to use a newer API version to see more tables and get some performance boosts and bugfixes.
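For example (the credentials below are placeholders), either of these should work without touching api.py:
from simple_salesforce import Salesforce, SalesforceLogin

# Pass the API version directly instead of editing DEFAULT_API_VERSION by hand
sf = Salesforce(
    username='user@example.com',
    password='password',
    security_token='token',
    version='51.0',
)

# Or, if you log in separately:
session_id, instance = SalesforceLogin(
    username='user@example.com',
    password='password',
    security_token='token',
    sf_version='51.0',
)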

IdentityServer3 idsrv.partial cookie gets too big

After login, when redirecting the user using context.AuthenticateResult = new AuthenticateResult(<destination>, subject, name, claims), the partial cookie gets so big that it spans up to 4 chunks and ends up causing a "request too big" error.
The number of claims is not outrageous (around 100) and I haven't been able to consistently reproduce this on other environments, even with a larger number of claims. What else might be affecting the size of this cookie payload?
Running IdSrv3 2.6.1
I assume that you are using some .NET Framework clients, because these problems are usually connected with the Microsoft.Owin middleware, whose encryption of the authentication ticket causes the cookie to get this big.
The solution for you is again part of this middleware. All of your clients (using the Identity Server as authority) need to have a custom IAuthenticationSessionStore implementation.
This is an interface, part of Microsoft.Owin.Security.Cookies.
You need to implement it according to whatever store you want to use for it, but basically it has the following structure:
public interface IAuthenticationSessionStore
{
    Task RemoveAsync(string key);
    Task RenewAsync(string key, AuthenticationTicket ticket);
    Task<AuthenticationTicket> RetrieveAsync(string key);
    Task<string> StoreAsync(AuthenticationTicket ticket);
}
We ended up implementing a SQL Server store for the cookies. Here is an example of a Redis implementation, and here is another one with an EF DbContext, but don't feel forced to use either of those.
Let's say that you implement MyAuthenticationSessionStore : IAuthenticationSessionStore with all the values that it needs.
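For illustration, a minimal in-memory sketch could look like the following (the class name is just an example, and a real deployment should back this with a durable store such as SQL Server or Redis):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;

public class MyAuthenticationSessionStore : IAuthenticationSessionStore
{
    private readonly ConcurrentDictionary<string, AuthenticationTicket> _tickets =
        new ConcurrentDictionary<string, AuthenticationTicket>();

    public Task<string> StoreAsync(AuthenticationTicket ticket)
    {
        // The key is all that ends up in the cookie instead of the full identity
        var key = Guid.NewGuid().ToString("N");
        _tickets[key] = ticket;
        return Task.FromResult(key);
    }

    public Task RenewAsync(string key, AuthenticationTicket ticket)
    {
        _tickets[key] = ticket;
        return Task.FromResult(0);
    }

    public Task<AuthenticationTicket> RetrieveAsync(string key)
    {
        AuthenticationTicket ticket;
        _tickets.TryGetValue(key, out ticket);
        return Task.FromResult(ticket);
    }

    public Task RemoveAsync(string key)
    {
        AuthenticationTicket removed;
        _tickets.TryRemove(key, out removed);
        return Task.FromResult(0);
    }
}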
Then, in your Owin Startup.cs, call:
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = "Cookies",
    SessionStore = new MyAuthenticationSessionStore(),
    CookieName = cookieName
});
With this in place, as the documentation for the IAuthenticationSessionStore SessionStore property says:
// An optional container in which to store the identity across requests. When used,
// only a session identifier is sent to the client. This can be used to mitigate
// potential problems with very large identities.
Your cookie will then contain only the session identifier, and the identity itself will be read from the store that you have implemented.

Should a RESTful API avoid requiring the client to know the resource hierarchy?

Our API's entry point has a rel named "x:reports" (where x is a prefix defined in the HAL representation, by way of a curie - but that's not important right now).
There are several types of reports. Following "x:reports" provides a set of these affordances, each with a rel of its own - one rel is named "x:proofofplay". There is a set of lookup values associated with this type of report (and only this type of report). The representation returned by following "x:proofofplay" has a rel to this set of values: "x:artwork".
This results in the following hierarchy
reports
  proofofplay
    artwork
While the "x:artwork" resource is fairly small, it does take some time to fetch it (10 sec). So the client has opted to async load it at app launch.
In order to get the "x:artwork" href, the client has to follow the links. I'm not sure whether this is a problem. It seems potentially unRESTful, as the client is depending on out-of-band knowledge of the path to this resource. If the path to artwork ever changes (highly unlikely), the client will break (though the hrefs themselves can change with impunity).
To see why I'm concerned, the launch function looks like this:
launch: function () {
  var me = this;
  Rest.getLinksFromEntryPoint(function(links) {
    Rest.getLinksFromHref(links["x:reports"].href, function(reportLinks) {
      Rest.getLinksFromHref(reportLinks["x:proofofplay"].href, function(popLinks) {
        me.loadArtworks(popLinks["x:artwork"].href);
      });
    });
  });
}
This hard-coding of the path simultaneously makes me think "that's fine - it's based on a published resource model" and "I bet Roy Fielding will be mad at me".
Is this fine, or is there a better way for a client to safely navigate such a hierarchy?
The HAL answer to this is to embed the resources.
Depending a bit on your server-side technology, this should be good enough in your case, because you need all the data to be there before the start of the application, and since you worry about doing this sequentially, you might parallelize it on the server.
Your HAL client should ideally treat things in _links and things in _embedded as the same type of thing, with the exception that in the second case you are also pre-populating the HTTP cache for the resources.
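For illustration, the entry point representation could embed the intermediate resources so that a single GET already contains everything needed to reach "x:artwork" (the hrefs below are made up):
{
  "_links": {
    "self": { "href": "/" }
  },
  "_embedded": {
    "x:reports": {
      "_links": { "self": { "href": "/reports" } },
      "_embedded": {
        "x:proofofplay": {
          "_links": {
            "self": { "href": "/reports/proofofplay" },
            "x:artwork": { "href": "/reports/proofofplay/artwork" }
          }
        }
      }
    }
  }
}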
Our js-based client does something like this:
var client = new Client(bookMarkUrl);
var resource = await client
  .follow('x:reports')
  .follow('x:proofofplay')
  .follow('x:artwork')
  .get();
If any of these intermediate links are specified in _links, we'll follow them and do GET requests on demand, but if any appear in _embedded, the request is skipped and the local cache is used. This has the benefit that in the future we can move things from _links to _embedded and speed up clients, who don't have to be aware of this change. It's all seamless.
In the future we intend to switch from HAL's _embedded to use HTTP2 Push instead.