How to send signed manifest to the PTE to interact with a published package - radix

I managed to publish a package to the PTE via resim publish.
Now I am stuck as I have the following problem:
How to send a signed manifest to the PTE (from my account as I need a badge that will be returned)?

To create a manifest and have it signed, you need an element (such as a button) which, upon activation, constructs the manifest, sends it to the PTE browser extension to be signed, and receives the result.
Here is some sample code; this is the TypeScript part:
document.getElementById('instantiateMainComponent')!.onclick = async function () {
  // Construct manifest
  const manifest = new ManifestBuilder()
    // Instantiates component
    .callFunction(Package_Address, 'ComponentName', 'instantiate', [])
    // Deposits returned resources to account
    .callMethodWithAllResources(Account_Address, 'deposit_batch')
    .build()
    .toString();
  // Send manifest to extension for signing
  const receipt = await signTransaction(manifest);
  // Add results here
};
And here is my associated HTML:
<h2>3. Instantiate Main Component</h2>
<p><button id="instantiateMainComponent">Instantiate</button></p>

This was one of the things I needed to do not too long ago. To clarify, on the public test environment (the PTE only, nothing else) not all transactions require signatures. In fact, only transactions which withdraw funds from an account require a signature; nothing else does. This means the following:
For all transactions which do not withdraw funds from an account, you can continue using the PTE API to send your transactions unsigned (a minimal sketch of an unsigned submission is at the end of this answer).
If signing transactions is necessary, then my recommendation is to construct and sign your transactions in Rust, since it already has an SBOR implementation and you can very easily sign all of your transactions with Rust plus the existing Scrypto libraries.
Regarding the Rust implementation, here is some example code I wrote in Rust which signs transactions and submits them to the PTE (PTE01, but you can change it to PTE02): https://github.com/0xOmarA/PTE-programmatic-interactions/blob/main/src/main.rs
Here is an example of this code being used in action: https://github.com/0xOmarA/RaDEX/tree/main/bootstrap
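For the first case (transactions that do not withdraw funds from an account), here is a minimal TypeScript sketch of an unsigned submission. The base URL, path and payload shape are assumptions for illustration only; check the PTE API you are targeting for the exact contract.
// Sketch only: the URL and payload shape below are assumptions, not the official PTE API contract.
const PTE_URL = 'https://pte01.radixdlt.com'; // hypothetical base URL, adjust to your PTE
async function submitUnsignedManifest(manifest: string, nonce: number): Promise<unknown> {
  const response = await fetch(`${PTE_URL}/transaction`, { // hypothetical path
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ manifest, nonce, signatures: [] }) // unsigned: empty signature list
  });
  if (!response.ok) {
    throw new Error(`PTE returned ${response.status}`);
  }
  return response.json(); // transaction receipt
}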

Related

HMS Wallet Kit-Error code -1 is returned when I add passes on the device side

I've downloaded the sample project of the Android client from the HUAWEI Developers official website
After installing it on the mobile phone, I want to add a membership card to HUAWEI Wallet.
The demo provides two methods for adding a pass. The key code is as follows:
public void saveToHuaWeiWallet(View view) {
    String jwtStr = getJwtFromAppServer(passObject);
    CreateWalletPassRequest request = CreateWalletPassRequest.getBuilder()
            .setJwt(jwtStr)
            .build();
    Log.i("testwalletKIT", "getWalletObjectsClient");
    walletObjectsClient = Wallet.getWalletPassClient(PassTestActivity.this);
    Task<AutoResolvableForegroundIntentResult> task = walletObjectsClient.createWalletPass(request);
    ResolveTaskHelper.excuteTask(task, PassTestActivity.this, SAVE_TO_ANDROID);
}
No matter which method I use for adding a pass, error code -1 is returned. I didn't find any description of the error code in the official documentation. Can anyone tell me why error code -1 is returned?
Parameter error. The possible causes are as follows:
No template has been created for the pass. Add a template by referring to: https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/wallet-guide-webpage
The pass has already been added. Enter the unique ID (serial number) of another pass.
The template ID and service number are incorrect. Enter the values of the passStyleIdentifier and passTypeIdentifier fields used during template creation on the server side.
Incorrect issuerId. Enter the app ID generated during app creation in AppGallery Connect.
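To double-check the template and serial number points above, make sure the identifiers in the pass instance you submit match what was configured during template creation. A minimal illustration (the values are placeholders; only the field names come from the causes listed above):
// Placeholder values; only the field names come from the causes listed above.
const passInstanceCheck = {
  passTypeIdentifier: 'your service number from template creation',
  passStyleIdentifier: 'your template ID from template creation',
  serialNumber: 'a unique serial number per pass'
};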

How to secure REST API from replay attacks with parameter manipulation?

I am developing secure payment APIs, and I want to avoid replay attacks with manipulation of the parameters in the url. For example in the following API call:
https://api.payment.com/wallet/transfer?from_account=123&to_account=456&amount=100
Once this API call is executed, someone with enough knowledge can execute the same API call by modifying any of the three parameters to his/her own advantage. I have thought of issuing a temporary token (transaction token) for each transaction. But this also doesn't sound like enough.
Can anyone suggest the best way to mitigate replay attacks with parameter tampering?
THE API SERVER
I am developing secure payment APIs, and I want to avoid replay attacks with manipulation of the parameters in the url.
Before we dive into addressing your concerns it's important to first clarify a common misconception among developers, that relates to knowing the difference between who vs what is accessing the API server.
The difference between who and what is accessing the API server.
This is discussed in more detail in this article I wrote, where we can read:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
If the quoted text is not enough for you to understand the differences, then please go ahead and read the entire section of the article, because without this being well understood you are prone to apply less effective security measures in your API server and clients.
SECURITY LAYERS AND PARAMETERS IN THE URL
For example in the following API call:
https://api.payment.com/wallet/transfer?from_account=123&to_account=456&amount=100
Security is all about applying as many layers of defence as possible in order to make an attack as hard and laborious as possible; think of it as the many layers of an onion you need to peel to reach the centre.
Attackers will always look for the easiest targets, the lowest hanging fruit in the tree, because they don't want to resort to a ladder when they can take fruit from another tree with lower hanging branches ;)
So one of the first layers of defence is to avoid putting parameters in the URL for sensitive calls. I would use a POST request with all the parameters in the body of the request, because such a request cannot be made by simply copy-pasting a URL into the browser or another tool; it requires more effort and knowledge to perform, i.e. the fruit hangs higher in the tree for the attacker.
Another reason is that GET requests end up in server logs, so they can be accidentally exposed and easily replayed.
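As a minimal sketch of that first layer, the same call could look like this as a POST with the parameters in the body (the field names follow your example URL; the authorization header stands for whatever user authentication you already have in place):
// Sketch only: moving the transfer parameters from the query string into a JSON body.
async function transfer(
  fromAccount: string,
  toAccount: string,
  amount: number,
  userAccessToken: string
): Promise<void> {
  const response = await fetch('https://api.payment.com/wallet/transfer', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${userAccessToken}` // whatever "who" authentication you already use
    },
    body: JSON.stringify({ from_account: fromAccount, to_account: toAccount, amount })
  });
  if (!response.ok) {
    throw new Error(`Transfer rejected: ${response.status}`);
  }
}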
REPLAY ATTACKS FOR API CALLS
Once this API call is executed, someone with enough knowledge can execute the same API call by modifying any of the three parameters to his/her own advantage.
Yes they can, and they can learn how your API works even if you don't have public documentation for it; they just need to reverse engineer it with the help of any open source tool for mobile apps and web apps.
I have thought of issuing a temporary token (transaction token) for each transaction. But this also doesn't sound like enough.
Yes, it's not enough, because this temporary token can be stolen via a MitM attack, just as shown in the article Steal That Api Key With a Man in the Middle Attack:
So, in this article you will learn how to setup and run a MitM attack to intercept https traffic in a mobile device under your control, so that you can steal the API key. Finally, you will see at a high level how MitM attacks can be mitigated.
So after performing the MitM attack to steal the token, it's easy to use curl, Postman or any other similar tool to make requests to the API server just as if you were the genuine who and what the API server expects.
MITIGATE REPLAY ATTACKS
Improving on Existing Security Defence
I have thought of issuing a temporary token (transaction token) for each transaction. But this also doesn't sound like enough.
This approach is good but, as you already noticed, not enough on its own. You can improve it, if you haven't done so already, by making the temporary token usable only once.
Another important defence measure is to not allow requests with the same amount and the same accounts (from_account, to_account) to be repeated in sequence, even if they carry a new temporary token.
Also, don't allow requests from the same source to be made too fast, especially if they are meant to come from human interactions.
These measures on their own will not totally solve the issue, but they add some more layers to the onion.
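A minimal server-side sketch of the one-time token and duplicate checks described above, using in-memory stores purely for illustration (a real deployment would use something like Redis or a database, and all names here are hypothetical):
// Sketch only: in-memory stores stand in for Redis/DB; every name here is hypothetical.
const issuedTokens = new Set<string>();            // transaction tokens issued but not yet used
const recentTransfers = new Map<string, number>(); // "from|to|amount" -> timestamp of last transfer

function consumeTransactionToken(token: string): boolean {
  // A token is valid exactly once: delete it the moment it is used.
  return issuedTokens.delete(token);
}

function isDuplicateTransfer(from: string, to: string, amount: number, windowMs = 60_000): boolean {
  const key = `${from}|${to}|${amount}`;
  const last = recentTransfers.get(key);
  const now = Date.now();
  recentTransfers.set(key, now);
  return last !== undefined && now - last < windowMs;
}

function handleTransfer(req: { token: string; from: string; to: string; amount: number }): string {
  if (!consumeTransactionToken(req.token)) return 'rejected: unknown or already-used token';
  if (isDuplicateTransfer(req.from, req.to, req.amount)) return 'rejected: same transfer repeated too quickly';
  return 'accepted'; // hand over to the actual payment logic here
}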
Using HMAC for the One Time Token
To help the server be more confident about who and what is making the request, you can use a keyed-hash message authentication code (HMAC), which is designed to prevent hijacking and tampering. As per Wikipedia:
In cryptography, an HMAC (sometimes expanded as either keyed-hash message authentication code or hash-based message authentication code) is a specific type of message authentication code (MAC) involving a cryptographic hash function and a secret cryptographic key. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message.
So you could have the client create an HMAC token from the request URL, the user authentication token, your temporary token, and a timestamp that is also present in a request header. The server would then grab the same data from the request, perform its own calculation of the HMAC token, and only proceed with the request if its own result matches the HMAC token header in the request.
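As a minimal sketch of that calculation in Node/TypeScript, using exactly the inputs listed above; the header names and the order of the inputs are assumptions that client and server simply need to agree on:
import { createHmac } from 'crypto';

// Sketch only: input order and header names are assumptions agreed between client and server.
function computeRequestHmac(
  secret: Buffer,
  url: string,
  userAuthToken: string,
  transactionToken: string,
  timestamp: string
): string {
  return createHmac('sha256', secret)
    .update(url)
    .update(userAuthToken)
    .update(transactionToken)
    .update(timestamp)
    .digest('hex');
}

// The client sends the result in a header (e.g. X-Request-HMAC) together with X-Timestamp;
// the server recomputes it from the same inputs and rejects the request on any mismatch,
// or when the timestamp is too old (which also narrows the replay window).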
For a practical example of this in action you can read part 1 and part 2 of this blog series about API protection techniques in the context of a mobile app, that also features a web app impersonating the mobile app.
So you can see here how the mobile app calculates the HMAC, and here how the Api server calculates and validates it. But you can also see here how the web app fakes the HMAC token to make the API server think that the requests is indeed from who and what it expects to come from, the mobile app.
Mobile app code:
/**
 * Compute an API request HMAC using the given request URL and authorization request header value.
 *
 * @param url the request URL
 * @param authHeaderValue the value of the authorization request header
 * @return the request HMAC
 */
private fun calculateAPIRequestHMAC(url: URL, authHeaderValue: String): String {
    val secret = HMAC_SECRET
    var keySpec: SecretKeySpec
    // Configure the request HMAC based on the demo stage
    when (currentDemoStage) {
        DemoStage.API_KEY_PROTECTION, DemoStage.APPROOV_APP_AUTH_PROTECTION -> {
            throw IllegalStateException("calculateAPIRequestHMAC() not used in this demo stage")
        }
        DemoStage.HMAC_STATIC_SECRET_PROTECTION -> {
            // Just use the static secret to initialise the key spec for this demo stage
            keySpec = SecretKeySpec(Base64.decode(secret, Base64.DEFAULT), "HmacSHA256")
            Log.i(TAG, "CALCULATE STATIC HMAC")
        }
        DemoStage.HMAC_DYNAMIC_SECRET_PROTECTION -> {
            Log.i(TAG, "CALCULATE DYNAMIC HMAC")
            // Obfuscate the static secret to produce a dynamic secret to initialise the key
            // spec for this demo stage
            val obfuscatedSecretData = Base64.decode(secret, Base64.DEFAULT)
            val shipFastAPIKeyData = loadShipFastAPIKey().toByteArray(Charsets.UTF_8)
            for (i in 0 until minOf(obfuscatedSecretData.size, shipFastAPIKeyData.size)) {
                obfuscatedSecretData[i] = (obfuscatedSecretData[i].toInt() xor shipFastAPIKeyData[i].toInt()).toByte()
            }
            val obfuscatedSecret = Base64.encode(obfuscatedSecretData, Base64.DEFAULT)
            keySpec = SecretKeySpec(Base64.decode(obfuscatedSecret, Base64.DEFAULT), "HmacSHA256")
        }
    }
    Log.i(TAG, "protocol: ${url.protocol}")
    Log.i(TAG, "host: ${url.host}")
    Log.i(TAG, "path: ${url.path}")
    Log.i(TAG, "Authentication: $authHeaderValue")
    // Compute the request HMAC using the HMAC SHA-256 algorithm
    val hmac = Mac.getInstance("HmacSHA256")
    hmac.init(keySpec)
    hmac.update(url.protocol.toByteArray(Charsets.UTF_8))
    hmac.update(url.host.toByteArray(Charsets.UTF_8))
    hmac.update(url.path.toByteArray(Charsets.UTF_8))
    hmac.update(authHeaderValue.toByteArray(Charsets.UTF_8))
    return hmac.doFinal().toHex()
}
API server code:
if (DEMO.CURRENT_STAGE == DEMO.STAGES.HMAC_STATIC_SECRET_PROTECTION) {
    // Just use the static secret during HMAC verification for this demo stage
    hmac = crypto.createHmac('sha256', base64_decoded_hmac_secret)
    log.info('---> VALIDATING STATIC HMAC <---')
} else if (DEMO.CURRENT_STAGE == DEMO.STAGES.HMAC_DYNAMIC_SECRET_PROTECTION) {
    log.info('---> VALIDATING DYNAMIC HMAC <---')
    // Obfuscate the static secret to produce a dynamic secret to use during HMAC
    // verification for this demo stage
    let obfuscatedSecretData = base64_decoded_hmac_secret
    let shipFastAPIKeyData = new Buffer(config.SHIPFAST_API_KEY)
    for (let i = 0; i < Math.min(obfuscatedSecretData.length, shipFastAPIKeyData.length); i++) {
        obfuscatedSecretData[i] ^= shipFastAPIKeyData[i]
    }
    let obfuscatedSecret = new Buffer(obfuscatedSecretData).toString('base64')
    hmac = crypto.createHmac('sha256', Buffer.from(obfuscatedSecret, 'base64'))
}
let requestProtocol
if (config.SHIPFAST_SERVER_BEHIND_PROXY) {
    requestProtocol = req.get(config.SHIPFAST_REQUEST_PROXY_PROTOCOL_HEADER)
} else {
    requestProtocol = req.protocol
}
log.info("protocol: " + requestProtocol)
log.info("host: " + req.hostname)
log.info("originalUrl: " + req.originalUrl)
log.info("Authorization: " + req.get('Authorization'))
// Compute the request HMAC using the HMAC SHA-256 algorithm
hmac.update(requestProtocol)
hmac.update(req.hostname)
hmac.update(req.originalUrl)
hmac.update(req.get('Authorization'))
let ourShipFastHMAC = hmac.digest('hex')
// Check to see if our HMAC matches the one sent in the request header
// and send an error response if it doesn't
if (ourShipFastHMAC != requestShipFastHMAC) {
    log.error("\tShipFast HMAC invalid: received " + requestShipFastHMAC
        + " but should be " + ourShipFastHMAC)
    res.status(403).send()
    return
}
log.success("\nValid HMAC.")
Web app code:
function computeHMAC(url, idToken) {
    if (currentDemoStage == DEMO_STAGE.HMAC_STATIC_SECRET_PROTECTION
            || currentDemoStage == DEMO_STAGE.HMAC_DYNAMIC_SECRET_PROTECTION) {
        var hmacSecret
        if (currentDemoStage == DEMO_STAGE.HMAC_STATIC_SECRET_PROTECTION) {
            // Just use the static secret in the HMAC for this demo stage
            hmacSecret = HMAC_SECRET
        } else if (currentDemoStage == DEMO_STAGE.HMAC_DYNAMIC_SECRET_PROTECTION) {
            // Obfuscate the static secret to produce a dynamic secret to
            // use in the HMAC for this demo stage
            var staticSecret = HMAC_SECRET
            var dynamicSecret = CryptoJS.enc.Base64.parse(staticSecret)
            var shipFastAPIKey = CryptoJS.enc.Utf8.parse($("#shipfast-api-key-input").val())
            for (var i = 0; i < Math.min(dynamicSecret.words.length, shipFastAPIKey.words.length); i++) {
                dynamicSecret.words[i] ^= shipFastAPIKey.words[i]
            }
            dynamicSecret = CryptoJS.enc.Base64.stringify(dynamicSecret)
            hmacSecret = dynamicSecret
        }
        if (hmacSecret) {
            var parser = document.createElement('a')
            parser.href = url
            var msg = parser.protocol.substring(0, parser.protocol.length - 1)
                + parser.hostname + parser.pathname + idToken
            var hmac = CryptoJS.HmacSHA256(msg, CryptoJS.enc.Base64.parse(hmacSecret)).toString(CryptoJS.enc.Hex)
            return hmac
        }
    }
    return null
}
NOTE: While the above code is not using the exact same parameters that you would use in your case, it is a good starting point for you to understand the basics of it.
As you can see, the way the HMAC token is calculated across the mobile app, the API server and the web app is semantically identical, thus resulting in the same HMAC token, and this is how the web app is able to defeat the API server's defence of only accepting valid requests from the mobile app.
The bottom line here is that anything you place in the client code can be reverse engineered in order to replicate it in another client. So should I use HMAC tokens in my use case?
Yes, because it's one more layer in the onion, or fruit that hangs higher in the tree.
Can I do better?
Yes you can; just keep reading...
Enhance and Strength the Security
Can anyone suggest the best way to mitigate replay attacks with parameter tampering?
Going with the layered defence approach once more, you should look into other layered approaches that will allow your API server to be more confident about who and what is accessing it.
So if the clients of your API server are only mobile apps, then please read this answer to the question How to secure an API REST for mobile app?.
In case you need to secure an API that serves both a mobile and a web app, then see this other answer to the question Unauthorized API Calls - Secure and allow only registered Frontend app.
GOING THE EXTRA MILE
Now I would like to recommend the excellent work of the OWASP Foundation:
The Web Security Testing Guide:
The OWASP Web Security Testing Guide includes a "best practice" penetration testing framework which users can implement in their own organizations and a "low level" penetration testing guide that describes techniques for testing most common web application and web service security issues.
The Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.

issues deploying solidity smart contract to rinkeby test network

I'm using OpenZeppelin to make a crowdsale contract. All my tests (30 of them) pass with flying colours ;) and I can migrate on a local Ganache blockchain no problem.
When I try to deploy to Rinkeby I start having issues. My config in truffle.js is:
rinkeby: {
    provider: rinkeybyProvider,
    network_id: 3,
    gas: 4712388,
    gasPrice: web3.utils.toWei("40", "gwei"),
    websockets: true,
    from: "0x9793371e69ed67284a1xxxx"
}
When I deploy to Rinkeby I get:
"SplitWallet" hit a require or revert statement somewhere in its constructor. Try:
* Verifying that your constructor params satisfy all require conditions.
* Adding reason strings to your require statements.
I have gone through and put messages in every revert in the constructor hierarchy, but I never see any of the messages. I thought it might be that my payees and shares were different lengths, but no, they are the same (the only parameters that the constructor for a SplitWallet takes).
Things to note:
I have an Infura API key
I am using the truffle-wallet-provider, with just a private key (no mnemonic), to deploy
I am confused (due to the above) about how my deploy script can know multiple (10) wallets on deployment. Usually (in Ganache) these are the 10 wallets Ganache generates for you, but here I am providing a private key, so it shouldn't know 10 wallets, just one: the address of the private key that is deploying the contract, no? (I'm talking about here):
module.exports = async (
    deployer,
    network,
    [owner, purchaser, investor, organisation, ...accounts] // how does it know these??
)
This last point makes me wonder, because I printed out owner/purchaser and they don't match my public key/wallet at all, so I have no idea where they are coming from. And if they don't match, and it defaults to the owner being accounts[0], then that wallet may not be able to pay for the gas... perhaps?
Thanks
The Rinkeby network id is 4, not 3 (3 is Ropsten's).
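With that fixed, the rinkeby block would look like this (everything else taken from your question as posted):
rinkeby: {
    provider: rinkeybyProvider,
    network_id: 4, // Rinkeby, not 3 (Ropsten)
    gas: 4712388,
    gasPrice: web3.utils.toWei("40", "gwei"),
    websockets: true,
    from: "0x9793371e69ed67284a1xxxx"
}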

Facebook Messenger Bot Proactive/Push Notifications using Azure

I am building a bot for Facebook Messenger using the Microsoft Bot Framework. I am planning to use CosmosDB for state management and also as my backend data store. (I am not stuck on CosmosDB and can use any other store if needed.)
I need to send daily/weekly proactive messages (push notifications) to users based on their time preference. I will be capturing their time preference when they first interact with the bot.
What is the best way to deliver these notifications?
As I will be storing these preferences in CosmosDB, I am thinking of using a CosmosDB trigger to create an Azure Function and schedule it based on the user's time preference. This Azure Function will make a call to my webhook, which will deliver these messages. If required, I will change the Function schedule when a user changes his/her preference.
My questions are:
Is this a good approach?
Are there any other alternatives (Notifications Hub?)
I should be able to set specific times for notifications (like at the top of the hour or something like that). Does it make sense to schedule an Azure Function to run at these hours rather than creating a function based on user preference? (I can actually combine these two approaches too.)
Thank you in advance.
First, I don't think there's any "right" answer to be given here; it's going to depend a lot on your domain's specific needs. Scale is going to play a major factor in the design of this. Will you have 100 users? 10000 users? 1mil users? I'm going to assume you want to design for maximum scale up front, but it could be overkill.
First, based on what you've described, I don't think a CosmosDB trigger is necessarily the solution to your problem because that's only going to fire when the preference data is created/updated. I assume that, from that point forward, your function needs to continuously fire at the time slot they've opted into, correct?
So let's pretend you let people choose from the 24hrs in the day. A naïve approach would be to simply use a scheduled trigger that fires up every hour, queries the CosmosDB for all the documents where the preference is set to that particular hour and then begins sending out notifications from there. The problem is how you scale from there and deal with issues of idempotency in the face of failures.
First off, a timer trigger only ever spins up one instance. If you were to just go query the CosmosDB documents and start processing them one by one in the scope of that single trigger, you'd hit a ceiling relatively quickly on how many notifications you can scale to. Instead what you'd want to do is use that timer trigger to fan out the notifications to as many "worker" function instances as possible. The timer trigger can act as the orchestrator in the sense that it can own the query against the CosmosDB and then turn each document result it finds for that particular notification time window into a message that it places on a queue to be processed by a separate function which will scale out on its own.
There are actually a couple ways you can accomplish this with Azure Functions, it really depends on how early an adopter of technology you are comfortable with being.
The first is what I would call the "manual" way, which would be done by simply using the existing Azure Storage Queue extension: take an IAsyncCollector<YourNotificationWorkerMessage> as a parameter to the timer function that's bound to the worker queue and pump out the messages through that. Then you write a second companion function which uses a QueueTrigger, bind it to that same queue, and it will take care of processing each message. This second function is where you get the scaling, enabling you to process all of the queued messages as quickly as possible based on whatever scaling parameters you choose to configure. This is the "simplest" approach.
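If it helps, here is a minimal sketch of that fan-out in Node/TypeScript (the database, container, queue and binding names are all assumptions, and the function.json binding definitions for the timer trigger, queue output and queue trigger are omitted):
// Sketch only (classic Azure Functions Node programming model); names are assumptions.
import { CosmosClient } from "@azure/cosmos";

const cosmos = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);

// Timer-triggered "orchestrator": runs hourly, finds users for this hour, fans out queue messages.
export async function notificationFanOut(context: any, timer: any): Promise<void> {
  const hour = new Date().getUTCHours();
  const { resources: users } = await cosmos
    .database("botdb")
    .container("preferences")
    .items.query({
      query: "SELECT c.userId, c.conversationReference FROM c WHERE c.preferredHour = @hour",
      parameters: [{ name: "@hour", value: hour }],
    })
    .fetchAll();

  // One queue message per user; the queue output binding performs the actual enqueue.
  context.bindings.notificationQueue = users.map((u: any) => ({
    userId: u.userId,
    conversationReference: u.conversationReference,
  }));
}

// Queue-triggered "worker": scales out independently, delivers one notification per message.
export async function notificationWorker(context: any, message: any): Promise<void> {
  // Call your webhook / proactive-message endpoint here.
  await fetch(process.env.NOTIFICATION_ENDPOINT_URL!, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(message),
  });
}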
The second approach would be to adopt the newer Durable Functions extension. With that model, you don't have to directly think about creating a worker queue. You simply kick off a new instance of your orchestrator function from the timer function, and the orchestrator fans out the work by invoking N "concurrent" calls to an activity for each notification. Now, it happens to distribute those calls using queues under the covers, but that's an implementation detail that you no longer need to maintain yourself. Additionally, if delivering the notification requires more involved work and/or retry logic, you might actually consider using a sub-orchestration instead of a simple activity. Finally, another added benefit of this approach is that you can "fan back in" to your main orchestrator function once all the notifications are delivered to do some follow-up work... even if that's simply some kind of event logging that the notification cycle has completed for this hour.
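A minimal sketch of the same fan-out using the Durable Functions JavaScript/TypeScript API, with the activity names being assumptions:
// Sketch only: Durable Functions (Node), with hypothetical activity names.
import * as df from "durable-functions";

// A timer trigger simply starts this orchestrator once per notification window.
module.exports = df.orchestrator(function* (context: any) {
  const hour: number = context.df.getInput();

  // Activity that queries CosmosDB for everyone who opted into this hour.
  const users: any[] = yield context.df.callActivity("GetNotificationCandidates", hour);

  // Fan out: one activity call per user, executed "concurrently" by the framework.
  const tasks = users.map((u) => context.df.callActivity("SendNotification", u));
  yield context.df.Task.all(tasks);

  // Fan back in: all notifications for this hour are done, so record the completion.
  yield context.df.callActivity("RecordCycleCompleted", { hour, count: users.length });
});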
Now, the challenge with either of these approaches is actually dealing with failure in initially fetching the candidates for notification from CosmosDB, paging through the results, and making sure you actually fan all of them out in an idempotent manner. You need to deal with possible hiccups as you page and you need to deal with the fact that your whole function could be torn down and you might have to restart. Perhaps on the initial run of the 8AM notifications you got through page 273 out of 371 pages and then you got hit with a complete network connectivity failure, or the VM your function was running on suffered a power failure. You could resume, but you'd need to know that you left off on page 273 and that you actually processed the 27th record out of that page, and start from there. Otherwise, you risk sending double notifications to your users. Maybe that's something you can accept, maybe it's not. Maybe you're ok with the 27 notifications on that page being duplicated as long as the first 272 pages aren't. Again, this is something you need to decide for your domain, but if you want to avoid this issue your orchestrator function will need to track its progress to ensure that it doesn't send out dupes. Again I would say Durable Functions has a leg up here as it comes with the ability to configure retries. Maintaining the state of a particular run is left up to the author in either approach though.
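If you do need that level of idempotency, one option is to persist a small checkpoint document per run and resume from it; the document shape and the use of a CosmosDB container for it below are purely illustrative assumptions:
// Sketch only: checkpoint document shape and container are made up for illustration.
interface RunCheckpoint {
  id: string;               // e.g. one checkpoint document per notification run (per hour)
  lastCompletedPage: number;
  lastCompletedIndex: number;
}

async function loadCheckpoint(container: any, runId: string): Promise<RunCheckpoint> {
  try {
    const { resource } = await container.item(runId, runId).read();
    return resource ?? { id: runId, lastCompletedPage: -1, lastCompletedIndex: -1 };
  } catch {
    return { id: runId, lastCompletedPage: -1, lastCompletedIndex: -1 };
  }
}

async function saveCheckpoint(container: any, checkpoint: RunCheckpoint): Promise<void> {
  // Upsert after each page (or each record) so a restart resumes where it left off
  // instead of re-sending notifications that already went out.
  await container.items.upsert(checkpoint);
}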
I use proactive dialogs extensively with the Bot Framework and Messenger without any issue. During your Facebook approval process you simply need to inform them that you will be sending notifications through Messenger with your bot. Usually, if you use it to inform your user and stay away from promotional content, you should be fine.
I also use an Azure Function to trigger the proactive dialog from a custom controller endpoint.
Below is sample code for the Azure Function:
public static void Run(TimerInfo notificationTrigger, TraceWriter log)
{
    try
    {
        // Serialize request object
        string timerInfo = JsonConvert.SerializeObject(notificationTrigger);
        // Create a request for the bot service with a security token
        HttpRequestMessage hrm = new HttpRequestMessage()
        {
            Method = HttpMethod.Post,
            RequestUri = new Uri(NotificationEndPointUrl),
            Content = new StringContent(timerInfo, Encoding.UTF8, "application/json")
        };
        hrm.Headers.Add("Authorization", NotificationApiKey);
        log.Info(JsonConvert.SerializeObject(hrm));
        // Call the service
        using (var client = new HttpClient())
        {
            Task task = client.SendAsync(hrm).ContinueWith((taskResponse) =>
            {
                HttpResponseMessage result = taskResponse.Result;
                var jsonString = result.Content.ReadAsStringAsync();
                jsonString.Wait();
                if (result.StatusCode != System.Net.HttpStatusCode.OK)
                {
                    // Throw whatever problem occurred as an exception with details
                    throw new Exception($"AzureFunction - ERROR - HTTP {result.StatusCode}");
                }
            });
            task.Wait();
        }
    }
    catch (Exception ex)
    {
        // TODO log
    }
}
Below is sample code for starting the proactive dialog:
public static async Task Resume<T, R>(string resumptionCookie) where T : IDialog<R>, new()
{
    // Deserialize reference to conversation
    ConversationReference conversationReference = JsonConvert.DeserializeObject<ConversationReference>(resumptionCookie);
    // Generate message from bot to user
    var message = conversationReference.GetPostToBotMessage();
    var builder = new ContainerBuilder();
    using (var scope = DialogModule.BeginLifetimeScope(Conversation.Container, message))
    {
        // From a cold start the service is not yet authenticated with dev bot azure services
        // We thus must trust the endpoint url.
        if (!MicrosoftAppCredentials.IsTrustedServiceUrl(message.ServiceUrl))
        {
            MicrosoftAppCredentials.TrustServiceUrl(message.ServiceUrl, DateTime.MaxValue);
        }
        var botData = scope.Resolve<IBotData>();
        await botData.LoadAsync(CancellationToken.None);
        // This is our dialog stack
        var task = scope.Resolve<IDialogTask>();
        T dialog = scope.Resolve<T>(); // Resolve the dialog using Autofac
        try
        {
            task.Call(dialog.Void<R, IMessageActivity>(), null);
            await task.PollAsync(CancellationToken.None);
        }
        catch (Exception ex)
        {
            // TODO log
        }
        finally
        {
            // Flush the dialog stack
            await botData.FlushAsync(CancellationToken.None);
        }
    }
}
Your dialog needs to be registered in Autofac.
Your resumptionCookie needs to be saved in your db.
You might want to check the Facebook policy regarding proactive messages.
There's a 24h limit, but it might not be a problem in your case:
https://developers.facebook.com/docs/messenger-platform/policy/policy-overview#standard_messaging

Paypal IPN currency & response

I am using a PayPal IPN script I found here:
http://coderzone.org/library/PHP-PayPal-Instant-Payment-Notification-IPN_1099.htm
I am aware that I can send information to PayPal and get a response. It states I can get the information back using $_POST. My query is: how do I specify the UK currency?
I also wanted to clarify a minor point. Am I correct that this is how I can confirm it was a success?
if ($_POST['payment_status'] == 'completed') {
    // Received payment!
    // $_POST['custom'] is the order id and has been paid for.
}
This might be a little late for you, sorry, but just in case: I currently use "currencyCode" => "AUD" and it is working in the sandbox.
There's a full list of the currency codes available at PayPal.
For yours, I'm guessing it would be:
$p->add_field('currencyCode', 'GBP');
As for your question about the IPN itself, it looks like you're on the right track. It will depend on the data you're getting back and whether you're interested in the individual transactions (if using adaptive payments) or if you're reversing them all on error etc. The easiest way to determine what you'll need to do is to simply display or log all the post data so you can see how it's constructed.
You'll also need to set it up so that the script is accessible by PayPal. You'll then pass the full URL of this script to the "notify_url" parameter and send it off to PayPal. Once the payment has completed PayPal will send a bunch of information to your script so that you can process it.
Unfortunately I'm not from a PHP background so I can't give you the exact code you'll need. Also note that there are a lot of security issues that you'll want to look into before going to a production environment. Not sure if you already intend to do this with that validateIPN function, but you need to ensure that you can tell whether it comes from PayPal and not a malicious user. One way would be to pass a value using the custom attribute and have PayPal pass this back to you, however you'd be much better off using the API certificates etc.
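For the validation itself, the usual approach is to echo the exact POST body back to PayPal prefixed with cmd=_notify-validate and only trust the notification if the reply is VERIFIED. A rough sketch in TypeScript rather than PHP, since that's not my background; treat the endpoint URL as an assumption and check PayPal's current IPN docs, especially for the sandbox host:
// Sketch only: verify an IPN message by echoing it back to PayPal.
// The host below is an assumption - use the sandbox host while testing and
// confirm the current endpoint in PayPal's IPN documentation.
async function verifyIpn(rawPostBody: string): Promise<boolean> {
  const response = await fetch('https://ipnpb.paypal.com/cgi-bin/webscr', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: 'cmd=_notify-validate&' + rawPostBody, // echo back exactly what PayPal sent
  });
  const answer = (await response.text()).trim();
  return answer === 'VERIFIED'; // anything else (e.g. INVALID) should be rejected
}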
If you haven't already, it may be worth checking out a few of the sample applications PayPal has done up, there seem to be quite a few PHP ones.
Let me know if you need anything else,
Use this, it works for me
$p->add_field('currency_code', 'GBP');
You need to use PayPal Adaptive Payments; IPN wouldn't help.
PayPal Adaptive Payments
Using the PayPal PHP library, it could look like this:
// Create an instance; you'll make all the necessary requests through this
// object. If you dig through the code, you'll notice an AdaptivePaymentsProxy class
// which has in it all of the classes corresponding to every object mentioned in the
// documentation of the API
$ap = new AdaptivePayments();
// Our request envelope
$requestEnvelope = new RequestEnvelope();
$requestEnvelope->detailLevel = 0;
$requestEnvelope->errorLanguage = 'en_GB';
// Our base amount, in other words the currency we want to convert into
// another currency type. It's very straightforward: it just has a public
// prop to hold the amount and the currency code.
$baseAmountList = new CurrencyList();
$baseAmountList->currency = array( 'amount' => $this->amount, 'code' => 'GBP' );
// Our target currency type. Given that I'm from Mexico I would like to
// see it in Mexican pesos. Again, you just need to provide the code of the
// currency. In the docs you'll have access to the complete list of codes
$convertToCurrencyListUSD = new CurrencyCodeList();
$convertToCurrencyListUSD->currencyCode = 'USD';
// Now create an instance of the ConvertCurrencyRequest object, which is
// the one necessary to handle this request.
// This object takes as parameters the ones we previously created, which
// are our base currency, our target currency, and the request envelope
$ccReq = new ConvertCurrencyRequest();
$ccReq->baseAmountList = $baseAmountList;
$ccReq->convertToCurrencyList = $convertToCurrencyListUSD;
$ccReq->requestEnvelope = $requestEnvelope;
// And finally we call the ConvertCurrency method on our AdaptivePayment object,
// and assign whatever result we get to our variable
$resultUSD = $ap->ConvertCurrency($ccReq);
$convertToCurrencyListUSD->currencyCode = 'EUR';
$resultEUR = $ap->ConvertCurrency($ccReq);
// Given that our result should be a ConvertCurrencyResponse object, we can
// look into its properties for further display/processing purposes
$resultingCurrencyListUSD = $resultUSD->estimatedAmountTable->currencyConversionList;
$resultingCurrencyListEUR = $resultEUR->estimatedAmountTable->currencyConversionList;