When using OAuth to ask for permissions for a Facebook app, the dialog does not take into account what is in scope - facebook

So basically I need my app to request different permissions so I can test it.
But when using the link
"https://www.facebook.com/dialog/oauth?client_id=MYID&redirect_uri=http%3A%2F%2Fwww.facebook.com%2Fconnect%2Flogin_success.html&scope=user_friends, publish_actions,friends_birthday"
It only asks for access to public information, not the rest.
I have tried many variations of the link; I can get it to ask for the email if I put it first after scope, but it does not request the remaining permissions.
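As an aside (my reading, not something confirmed in this question): the scope value is a comma-separated list, and the unencoded space after the first comma may be what stops the later entries from being parsed. A sketch of the same link with the whitespace removed:
"https://www.facebook.com/dialog/oauth?client_id=MYID&redirect_uri=http%3A%2F%2Fwww.facebook.com%2Fconnect%2Flogin_success.html&scope=user_friends,publish_actions,friends_birthday"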
I would very much appreciate your help.
My application works with a .sh script on Ubuntu and needs these permissions so the script can run against my Facebook account.
I don't require the app to be used by anyone else in the future, which is why I limited myself to asking for permissions with this kind of link instead of building an interface.
This is the script I would like to run via the app, so I have to grant the application the corresponding permissions. I don't know if the script works yet:
#!/bin/bash
NOW=$(date +"%m/%d")
#Enter your access token below within quotations
ACCESS_TOKEN="TOKEN"
#Get the names and UIDs of your unfortunate friends born on a day like today (FQL)
curl "https://api.facebook.com/method/fql.query?access_token=$ACCESS_TOKEN&query=SELECT%20first_name,uid%20from%20user%20where%20uid%20in%20(SELECT%20uid2%20from%20friend%20where%20uid1=MYUSERID)%20AND%20substr(birthday_date,0,5)%20=%20'$NOW'" > birthdaywishtemp.xml
names=$(sed -n -e 's/.*<first_name>\(.*\)<\/first_name>.*/\1/p' birthdaywishtemp.xml)
ids=$(sed -n -e 's/.*<uid>\(.*\)<\/uid>.*/\1/p' birthdaywishtemp.xml)
F_ARRAY=( $names )
U_ARRAY=( $ids )
#Wish each of them with the same old boring message
for (( i = 0 ; i < ${#U_ARRAY[@]} ; i++ ))
do
    #(optional) fetch the friend's profile; double quotes so the variable expands
    curl "https://graph.facebook.com/${U_ARRAY[$i]}"
    #Post the greeting to the friend's feed, keeping the API response as a log
    curl -F "access_token=$ACCESS_TOKEN" \
         -F "message=Hey, Happy Birthday...${F_ARRAY[$i]}" \
         "https://graph.facebook.com/${U_ARRAY[$i]}/feed" >> birthdaywishbackup.log
    #Let you know the progress
    echo "Wished ${F_ARRAY[$i]}" >> birthdaywishbackup.log
done
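For completeness, a minimal sketch of how the ACCESS_TOKEN placeholder above could be obtained once the dialog redirects back with a code parameter (MYID, MYSECRET, and CODE_FROM_REDIRECT are placeholders; this is the standard Graph OAuth token exchange, not part of the original script):
#Hypothetical token exchange: redirect_uri must match the one used in the dialog
curl "https://graph.facebook.com/oauth/access_token?client_id=MYID&client_secret=MYSECRET&redirect_uri=http%3A%2F%2Fwww.facebook.com%2Fconnect%2Flogin_success.html&code=CODE_FROM_REDIRECT"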

Related

determine required permissions for AWS CDK

I'm working with AWS CDK and every time I go to create a new resource (CodePipeline, VPC, etc) I end up in the same loop of...
try to deploy
"you are not authorized to foo:CreateBar"
update IAM permissions
try to deploy
"you are not authorized to baz:CreateZzz"
update IAM permissions
...over and over again. Then the same happens when I cdk destroy, but for "foo:DeleteFoo".
Is there a more efficient way to determine what permissions a policy needs to perform a certain CDK action? Maybe somewhere in the documentation I can reference?
Thanks
Here is a script that will execute whatever you pass to it, capture the timestamps from just before to just after the run, and print all the AWS API events recorded for the configured default AWS user in CloudTrail. It can take around 20 minutes for the actions to show up in CloudTrail, so the script checks every minute until it gets results for that time range. If no AWS API calls are made during the time range, no results will ever be returned. It's a simple script; there is no max timeout or anything.
#!/bin/bash -x
user_name=`aws sts get-caller-identity | jq -r '.Arn' | sed -e 's/user\// /g' | awk '{print $2}'`
sleep 5 # Sleep to avoid getting the sts call in our time range
start_time=`date`
sleep 1 # Sleep to avoid millisecond rounding issues
eval "$@" # run whatever command string was passed to the script
sleep 1 # Sleep to avoid millisecond rounding issues
end_time=`date`
actions=""
while [ -z "$actions" ]; do
sleep 60
echo "Checking for events from $start_time to $end_time..."
actions=`aws cloudtrail lookup-events --lookup-attributes AttributeKey=Username,AttributeValue=${user_name} --start-time "${start_time}" --end-time "${end_time}" | jq -r '.Events[].CloudTrailEvent' | jq -s | jq -r '.[] | "\(.eventSource) \(.eventName)"' | sed -e 's/.amazonaws.com /:/g' | sed -e 's/[0-9]//g' | sort | uniq`
done
echo "AWS Actions Used:"
echo "$actions"
I call it get-aws-actions.sh; it requires the AWS CLI to be installed, as well as jq. For CDK I would use it like this:
./get-aws-actions.sh "cdk deploy && cdk destroy"
I'd have my admin-level credentials configured as the default profile so I know the deployment will not fail because of permission issues, then I use the results returned by this script to grant permissions to a more specific deployment user/role for long-term use. The problem you can run into is that the first time through you may only see a bunch of :Create* or :Add* actions, but really you'll need to add all the lifecycle actions for the ones you see. So if you see dynamodb:CreateTable, you'll want to make sure you also add UpdateTable and DeleteTable. If you see s3:PutBucketPolicy, you'll also want s3:DeleteBucketPolicy.
To be honest, for any service that doesn't have API calls granting access to data, I will just do <service>:*. An example might be ECS: there is no ECS API call I could use to do anything to a container that CloudFormation won't also need to do to manage the service. So for that service, if I knew I was doing containers, I'd just grant ecs:* on * to my deployer role. For a service like S3, Lambda, SQS, or SNS, where the API exposes data access as well as resource creation, I'll need to be more deliberate with the permissions granted. My deployer role shouldn't have access to read all the data off all buckets or execute functions, but it does need to create buckets and functions.
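As an illustrative sketch of that split (the cdk-deployer role name and the exact action list are my assumptions, not from the original answer):
# Hypothetical policy: broad ecs:* but only lifecycle (not data) actions for S3
aws iam put-role-policy --role-name cdk-deployer --policy-name cdk-deploy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {"Effect": "Allow", "Action": "ecs:*", "Resource": "*"},
      {"Effect": "Allow",
       "Action": ["s3:CreateBucket", "s3:DeleteBucket",
                  "s3:PutBucketPolicy", "s3:DeleteBucketPolicy"],
       "Resource": "*"}
    ]
  }'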

Owncloud Calendar ICS Backup

I wanted to have a regular backup of my Owncloud calendars as ICS files, in case the server runs into a problem that I don't have time to fix right away. For this purpose I wrote a little script, which can be run as a cronjob.
Any feedback, improvements, alterations are welcome!
I have been using this script for quite a while. It was a big help in having a backup of calendars and contacts from my ownCloud installation. Thanks!
However, one thing really bugged me with the script by envyrus: new calendars/addressbooks need to be shared manually with the "backup user" whose calendars will be backed up. This made the script basically useless for me, because my wife creates and deletes her calendars and task lists quite often.
There is a script which can automatically deal with additionally created/deleted calendars, since it fetches all data from the database and not via HTTP requests (like the script from envyrus). It just creates a backup of every single calendar/addressbook existing in the database. Giving a username/password combination is not necessary when using this script, and there is no need to share the calendars to be backed up with a certain user. Last but not least, the script doesn't require root privileges.
From the script's README:
This Bash script exports calendars and addressbooks from
ownCloud/Nextcloud to .ics and .vcf files and saves them to a
compressed file. Additional options are available.
Starting with version 0.8.0, there is no need anymore for a file with
user credentials because all data is fetched directly from the
database. If only calendars/addressbooks of certain users shall be
backed up, list them in users.txt without any passwords.
Maybe this is also a help for others: calcardbackup
DISCLAIMER: I created this script for a little Owncloud instance that I run for myself and 1-2 other friends - it is not meant for any "serious business", so to speak. I used the scripts from this and this site as a starting point - thank you!
To create ICS backups of all the user calendars, I created an Owncloud user called "calendarBackup", with whom other users can share their calendars. I wrote a little script that loops through all those calendars and downloads the ICS files. They are then put into a shared folder owned by calendarBackup, and the backup is distributed across users. (An easy adjustment could be made so that each user gets his own calendar files.)
The advantage to this approach is that the script doesn't need to know all the user passwords.
Here is the code:
#!/bin/bash
#owncloud login data for calendar backup user
OCuser=owncloudUserName
OCpassword="owncloudUserPassword"
OCpath="/var/www/owncloud/"
OCbaseURL="https://localhost/owncloud/"
OCdatabase="owncloudDatabaseName"
#destination folder for calendar backups
dest="/var/www/owncloud/data/owncloudUserName/files/Backup/"
#mysql user data with access to owncloud database
MSQLuser=owncloudMysqlUser
MSQLpassword="owncloudMysqlUserPassword"
#timestamp used as backup name
timeStamp=$(date +%Y%m%d%H%M%S)
archivePassword="passwordForArchivedCalendars"
#apache user and group
apacheUser="apacheUser"
apacheGroup="apacheGroup"
#create folder for new backup files
mkdir "$dest$timeStamp"
#create array of calendar names from Owncloud database query
calendars=($(mysql -B -N -u $MSQLuser -p$MSQLpassword -e "SELECT uri FROM $OCdatabase.oc_calendars"))
calendarCount=${#calendars[@]}
#create array of calendar owners from Owncloud database query
owners=($(mysql -B -N -u $MSQLuser -p$MSQLpassword -e "SELECT principaluri FROM $OCdatabase.oc_calendars"))
loopCount=0
#loop through all calendars
while [ $loopCount -lt $calendarCount ]
do
#see if owner starts with "principals/users/"
#(this part of the script assumes that principaluri for normal users looks like this: principals/users/USERNAME )
if [ "${owners[$loopCount]:0:17}" = "principals/users/" ]
then
#concatenate download url
url=$OCbaseURL"remote.php/dav/calendars/$OCuser/${calendars[$loopCount]}_shared_by_${owners[$loopCount]:17}?export"
#echo $url
#download the ics files (if download fails, delete file)
wget \
--output-document="$dest$timeStamp/${owners[$loopCount]:17}${calendars[$loopCount]}.ics" \
--no-check-certificate --auth-no-challenge \
--http-user=$OCuser --http-password="$OCpassword" \
"$url" || rm "$dest$timeStamp/${owners[$loopCount]:17}${calendars[$loopCount]}.ics"
#echo ${owners[$loopCount]:17}
fi
#echo "${calendars[$loopCount]} ${owners[$loopCount]}"
loopCount=$(($loopCount + 1))
done
#zip backed up ics files and remove the folder (this could easily be left out, change the chown command though)
zip -r -m -j -P $archivePassword "$dest$timeStamp" "$dest$timeStamp"
rm -R "$dest$timeStamp"
#chown needed so owncloud can access backup file
chown $apacheUser:$apacheGroup "$dest$timeStamp.zip"
#update owncloud database of calendar backup user
sudo -u $apacheUser php "$OCpath"occ files:scan $OCuser
A few notes on the script:
It is written for a Debian shell.
It works for Owncloud 9.1 with MySQL.
It assumes the download URL for a shared calendar looks like this:
OwncloudURL/remote.php/dav/calendars/LoggedInOwncloudUser/CalendarName_shared_by_CalendarOwner?export
To check for the correct URL, simply download a shared calendar in the web interface and check the download URL.
It assumes that the calendar names are stored in the column "uri" of the table "oc_calendars".
It assumes that the calendar owner is stored in the column "principaluri" of the table "oc_calendars" and that all normal users are prefixed with "principals/users/".
It needs sudo permission to update Owncloud file structure.
It needs zip to be installed.
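For the cronjob mentioned at the top, a minimal sketch (the script path, schedule, and log file are placeholders, not from the original post):
#Hypothetical root crontab entry: run the backup nightly at 03:15
15 3 * * * /usr/local/bin/owncloud-calendar-backup.sh >> /var/log/calendar-backup.log 2>&1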

Fetch a specific ID using Rally REST and curl

I am new to the Rally REST API. I have used curl/Perl/REST APIs for RTC, so I am familiar with the approach, but I learned mostly by using examples. I need to be able to fetch a specific ID and the Name associated with it, accessed through curl and Perl scripting. For example, DE46835 Name: This is my defect. I haven't found any examples that fetch just a known ID. Can you point me to any documentation for this or provide an example of how to do this?
If you know the FormattedID you should be able to query using a URL like this:
https://rally1.rallydev.com/slm/webservice/v2.0/defect?query=(FormattedID = "DE46835")
The full web service api documentation is available here:
https://rally1.rallydev.com/slm/doc/webservice/
Thank you for the answer. I was able to figure it out and get it to work.
Here is what I did.
$cmd = "curl -k -u \"${user}:${password}\" \"${url}\" -c ${cookies} -o ${auth}";
system($cmd);
Then
$cmd = "curl -k \"$url\" -o ${return} -b ${cookies}";
where $url=https://rally1.rallydev.com/slm/webservice/v2.0/defect?query=(FormattedID = "DE46835")
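One detail worth flagging (my assumption about shell handling, not from the original exchange): the spaces and quotes in the query usually need to be URL-encoded (or the whole URL quoted) when passed to curl, and a fetch parameter can limit the response to just the fields needed:
curl -s -u "${user}:${password}" \
  "https://rally1.rallydev.com/slm/webservice/v2.0/defect?query=(FormattedID%20%3D%20%22DE46835%22)&fetch=FormattedID,Name"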

Curl command uploading document fails when run from Perl

I've got a Perl script that uploads documents into Alfresco using curl.
Some of the documents have an ampersand in the file name, and initially this caused curl to fail. I fixed this by placing a caret symbol in front of the ampersand. But now I'm finding some documents fail to upload when they don't have a space on either side of the ampersand. Other documents with spaces in the file name and an ampersand do load successfully.
The snippet of Perl code that is running is:
# Escape & for curl in file name with a ^
my $downloadFileNameEsc = ${downloadfile};
$downloadFileNameEsc =~ s/&/^&/g;
$command = "curl -u admin:admin -F file=\@${downloadFileNameEsc} -F id=\"${docId}\" -F title=\"${docTitle}\" -F tags=\"$catTagStr\" -F abstract=\"${abstract}\" -F published=\"${publishedDate}\" -F pubId=\"${pubId}\" -F pubName=\"${pubName}\" -F modified=\"${modifiedDate}\" -F archived=\"${archived}\" -F expiry=\"${expiryDate}\" -F groupIds=\"${groupIdStr}\" -F groupNames=\"${groupNameStr}\" ${docLoadUrl}";
logmsg(4, $command);
my @cmdOutput = `$command`;
$exitStatus = $?;
my $upload = 0;
logmsg(4, "Alfresco upload status $exitStatus");
if ($exitStatus != 0) {
You can see that I am using backticks to execute the curl command so that I can read the response. The Perl script is being run under Windows.
What this effectively tries to run is:
curl -u admin:admin -F file=@tmp-download/Multiple%20Trusts%20Gift%20^&%20Loan.pdf -F id="e2ef104d-b4be-4896-8360-7d6f2e7c7b72" ....
This works.
curl -u admin:admin -F file=@tmp-download/Quarterly_Buys^&sells_Q1_2006.doc -F id="78d18634-ee93-4c29-b01d-270aeee3219a" ....
This fails!!
The only difference I can see is that in the one that works, the file name has spaces (%20) somewhere around the ampersand, not necessarily next to it.
I can't see why one runs successfully and the other doesn't. I think it must be something to do with backticks and ampersands in the file name. I haven't tried using system, as I wanted to capture the response.
Any thoughts? I've exhausted all options.
You should learn to use Perl modules. Perl has some great modules to handle the Web requests. If you depend upon operating system commands, you will end up with not only dependencies upon those commands, but shell interactions and whether or not you need to quote special characters.
Perl modules remove a lot of the issues that you can run into. You are no longer dependent upon particular commands or even particular implementation of those commands. (The curl command can vary from system to system, and may not even be on the system you're on). Plus, most of these modules handle the piddling details for you (such as URI escaping strings).
LWP is the standard Perl library for implementing these requests. Take a look at the LWP Cookbook, which is a tutorial on the whole HTTP process. Basically, you need to create an agent, which is really just a virtual web browser for you to use. Then you can configure it as you might need (for example, setting the machine, browser type, etc.).
What is really nice is HTTP::Request::Common that provides a simple interface for using HTTP forms.
use LWP::UserAgent;
use HTTP::Request::Common qw(POST);

# form-data request; the arrayref value tells LWP to upload the file's contents
my $request = POST $docLoadUrl,
    Content_Type => 'form-data',
    Content      => [
        file      => [ $downloadFileName ],
        id        => $docId,
        title     => $docTitle,
        tag       => $catTagStr,
        abstract  => $abstract,
        published => $publishedDate,
        pubId     => $pubId,
        pubName   => $pubName,
        ...
    ];
my $response = LWP::UserAgent->new->request($request);
This is a lot easier to read and maintain. Plus, it will handle URI encoding for you.

cURL example of posting photos remotely to Facebook?

I am currently having a problem, where I try to post a picture to Facebook and I get an "error" response:
"Requires upload file", with OAuth exception #324.
I have the access token in there just fine, and I can adapt my code from a cURL example relatively easily. All the examples I can find show how to do it in PHP (which I don't know) or something similar. Any help with an example of how to upload a photo just from the cURL command line tool would be greatly appreciated.
I just can't find what I am looking for anywhere for the life of me.
Are you appending with the @ character?
curl -F 'access_token=xxx' \
     -F 'source=@img.jpg' \
     -F 'message=Test' \
     'https://graph.facebook.com/me/photos'
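If it helps, the same form upload should also work against a specific album by swapping the endpoint (ALBUM_ID here is a placeholder, not from the original answer):
curl -F 'access_token=xxx' \
     -F 'source=@img.jpg' \
     -F 'message=Test' \
     'https://graph.facebook.com/ALBUM_ID/photos'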