How do I get a shared value in Fastlane?

I use the changelog_from_git_commits Fastlane action to gather a changelog from the commit logs. The output ends up in the FL_CHANGELOG shared value:
Actions.lane_context[SharedValues::FL_CHANGELOG] = changelog
Now I would like to post the changelog along with my Slack notification:
slack(payload: { 'Changelog' => WHAT_DO_I_WRITE_HERE? })
What do I write there? ENV["FL_CHANGELOG"] didn’t work.

This worked:
slack(payload: { 'Changelog' => Actions.lane_context[SharedValues::FL_CHANGELOG] })


How to polyfill Buffer for jsonwebtoken in Webpack 5

I am upgrading to Webpack 5 and I have an issue with the package jsonwebtoken (https://github.com/auth0/node-jsonwebtoken), which needs Buffer (at https://github.com/auth0/node-jsonwebtoken/blob/master/sign.js#L91).
Since Webpack 5 no longer includes polyfills for Node.js built-ins, when I try to use the sign function from jsonwebtoken it throws the following error:
message: "Buffer is not defined"
stack: "ReferenceError: Buffer is not defined↵
at module.exports (webpack-internal:///./node_modules/jsonwebtoken/sign.js:91:26)↵
To solve the issue I installed https://github.com/feross/buffer with
npm install buffer
and in my webpack config I added
resolve: {
  fallback: {
    "Buffer": require.resolve('buffer/'),
  },
},
or
resolve: {
  fallback: {
    "buffer": require.resolve('buffer/'),
  },
},
I also tried
resolve: {
  fallback: {
    "buffer": require.resolve('buffer/').Buffer,
  },
},
But this last one produces a Webpack schema error:
configuration.resolve.fallback['Buffer'] should be one of these:
[non-empty string, ...] | false | non-empty string
-> New request.
Details:
* configuration.resolve.fallback['Buffer'] should be an array:
[non-empty string, ...]
-> Multiple alternative requests.
* configuration.resolve.fallback['Buffer'] should be false.
-> Ignore request (replace with empty module).
* configuration.resolve.fallback['Buffer'] should be a non-empty string.
-> New request.
at validate (/home/ant1/packcity/front-pmd/node_modules/webpack/node_modules/schema-utils/dist/validate.js:104:11)
Despite my attempts it is not working and the error persists.
Has anyone succeeded in adding the polyfill for Buffer to an app bundled with Webpack?
Any help would be really appreciated.
I just solved my issue by adding
new webpack.ProvidePlugin({
  Buffer: ['buffer', 'Buffer'],
}),
As suggested here https://github.com/ipfs/js-ipfs/issues/3369#issuecomment-721975183
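For reference, here is a minimal webpack.config.js sketch that combines both pieces (the buffer npm package as a resolve.fallback plus ProvidePlugin); the entry and output paths are placeholders, not from the original question:

// webpack.config.js - minimal sketch; entry/output paths are placeholders
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  resolve: {
    fallback: {
      // map Node's "buffer" module to the npm "buffer" package
      buffer: require.resolve('buffer/'),
    },
  },
  plugins: [
    // make the bare Buffer identifier resolve to the buffer package's Buffer export
    new webpack.ProvidePlugin({
      Buffer: ['buffer', 'Buffer'],
    }),
  ],
};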
I found this question when having a similar issue with Gatsby. To fix, I added:
const webpack = require("webpack")

exports.onCreateWebpackConfig = ({ actions }) => {
  actions.setWebpackConfig({
    plugins: [
      new webpack.ProvidePlugin({
        Buffer: [require.resolve("buffer/"), "Buffer"],
      }),
    ],
  })
}
to my gatsby-node.js config.
I have solved it this way in Gatsby. I didn't have to install the buffer dependency. Just added this to my gatsby-node.js file.
exports.onCreateWebpackConfig = ({ actions, stage, plugins }) => {
  if (stage === 'build-javascript' || stage === 'develop') {
    actions.setWebpackConfig({
      plugins: [plugins.provide({ Buffer: ['buffer/', 'Buffer'] })]
    });
  }
};

How do I get a Slack message's id/timestamp from a Hubot script?

There doesn't seem to be a timestamp property and the id property is undefined. Here is the hubot's plugin script code:
module.exports = (robot) ->
  robot.hear /\bclocks?\b/i, (msg) ->
    msg.http("https://slack.com/api/reactions.add")
      .query({
        token: process.env.SLACK_API_TOKEN
        name: "bomb"
        timestamp: msg.message.timestamp # This property doesn't exist
      })
      .post() (err, res, body) ->
        console.log(body)
    return
The response that I get back from the Slack API is:
{"ok":false,"error":"bad_timestamp"}
When I log msg.message it looks like this:
{ user:
{ id: 'abc123',
name: 'travis',
room: 'test-bots',
reply_to: 'zyx987' },
text: 'clock',
id: undefined,
done: false,
room: 'test-bots' }
How can I get a timestamp or id for the message that triggered the listener?
I heard back from Slack's team and there is a newer property called rawMessage that you have access to when you upgrade to the newer API. Here are the steps I went through to get it working:
Upgrade nodejs & hubot (this may be optional, but our versions were very out of date)
Upgrade the slack-adapter and follow their instructions to connect to the newer API https://github.com/slackhq/hubot-slack#upgrading-from-earlier-versions-of-hubot
You should now get access to the newer properties from your scripts.
Here's the code that worked for me after upgrading:
https://gist.github.com/dieseltravis/253eb1c6fea97f116ab0
module.exports = (robot) ->
  robot.hear /\bclocks?\b/i, (msg) ->
    queryData = {
      token: process.env.HUBOT_SLACK_TOKEN
      name: "bomb"
      channel: msg.message.rawMessage.channel # required with timestamp, uses rawMessage to find this
      timestamp: msg.message.id # this id is no longer undefined
    }
    if (queryData.timestamp?)
      msg.http("https://slack.com/api/reactions.add")
        .query(queryData)
        .post() (err, res, body) ->
          # TODO: error handling
          return
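For comparison, the same reactions.add call can be made outside Hubot in plain JavaScript. This is only a sketch, assuming Node 18+ (global fetch), a bot token in a SLACK_API_TOKEN environment variable, and a channel and timestamp captured from the triggering message:

// minimal sketch: add a "bomb" reaction to a known message via the Slack Web API
async function addReaction(channel, timestamp) {
  const res = await fetch("https://slack.com/api/reactions.add", {
    method: "POST",
    headers: {
      "Content-Type": "application/json; charset=utf-8",
      Authorization: `Bearer ${process.env.SLACK_API_TOKEN}`,
    },
    body: JSON.stringify({ channel, timestamp, name: "bomb" }),
  });
  const body = await res.json();
  if (!body.ok) console.error("reactions.add failed:", body.error);
}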

parse.com or iron.io returns SSL error

I use iron.io to call the following parse.com function to get Facebook details of my user's friends.
var getDetailsForID = function (fbID) {
  var thePromise = new Parse.Promise();
  // TODO: maybe we can batch several users together into a single request................
  console.log("Enter getDetailsForID");
  FB.api('/v1.0', 'post', {
    batch: [
      { method: 'get', name: 'basic', relative_url: fbID + '?fields=id,name,gender&include_headers=false', omit_response_on_success: false },
    ]
  }, function(res) {
    console.log("Enter callback in getDetailsForID");
    if(!res || res.error) {
      console.log(!res ? 'error occurred' : res.error);
      return;
    }
    console.log(" getDetailsForID res: " + res);
    thePromise.resolve(res);
  });
  console.log("Exit getDetailsForID");
  return thePromise;
}
In the iron.io log I see:
Enter callback in getDetailsForID
[Error: 139994800940864:error:0607907F:digital envelope routines:EVP_PKEY_get1_RSA:expecting an rsa key:../deps/openssl/openssl/crypto/evp/p_lib.c:288:
The following are not called:
console.log(" getDetailsForID res: " + res);
thePromise.resolve(res);
Any idea how to resolve this problem?
Since this question was answered, IronWorker has released a Docker workflow. Feel free to use our official iron/node Docker image: https://github.com/iron-io/dockerworker/tree/master/node
Ah, this is definitely not a problem with Iron.io but a problem with your POST to the Facebook v1.0 API.
+ '?fields=id,name,gender&include_headers=false', omit_response_on_success: false
Do you really want to omit the response on success? Which Facebook endpoint are you sending a POST to?
Edit:
IronWorker is currently set to Node 0.10.25 as of 07/22/2014; if your Node version is < 0.10.25 you may receive this error.
Fix: load your own version of Node.
In your .worker file, add the following:
deb "http://ppa.launchpad.net/chris-lea/node.js/ubuntu/pool/main/n/nodejs/nodejs_0.10.29-1chl1~trusty1_amd64.deb"
# OR you can download it from a local copy
deb "nodejs_0.10.29-1chl1~trusty1_amd64.deb"
You can install other missing or updated versions of binaries in a similar manner if there is a .deb for it.
Example in practice here on github
tl;dr: use the latest version of Node, and possibly OpenSSL as well.

Setting up a live Magento 1.7 site on a local server

I am trying to set up a live Magento site on my local system. I have gone through various forums and tried all the suggested solutions, but I am still not able to enter the admin section.
I have modified the Varien.php file as suggested, cleared the cache files under var, and purged my browser cache, but I still cannot log in to the admin panel. It gives the following URL:
http://praveen.linuxstagedb.com/poppyshop/index.php/spsitemanager/dashboard/index/key/4d80fdfbf3d9f40ab36ac79a25fb12ab/
Note: spsitemanager is used for the admin path.
Can anyone help please?
Thanks
Praveen Shukla
Remove the code that you have commented out in that file and paste the code below in the same area.
// session cookie params
$cookieParams = array(
    'lifetime' => $cookie->getLifetime(),
    'path' => $cookie->getPath(),
    //'domain' => $cookie->getConfigDomain(),
    //'secure' => $cookie->isSecure(),
    //'httponly' => $cookie->getHttponly()
);
if (!$cookieParams['httponly']) {
    unset($cookieParams['httponly']);
    if (!$cookieParams['secure']) {
        unset($cookieParams['secure']);
        if (!$cookieParams['domain']) {
            unset($cookieParams['domain']);
        }
    }
}
if (isset($cookieParams['domain'])) {
    $cookieParams['domain'] = $cookie->getDomain();
}

How to retrieve the list of all GitHub repositories of a person?

We need to display all the projects from a person's GitHub account.
How can I display the names of all the Git repositories of a particular person, given their GitHub username?
You can use the GitHub API for this. Hitting https://api.github.com/users/USERNAME/repos will list the public repositories for the user USERNAME.
Use the GitHub API:
/users/:user/repos
This will give you all the user's public repositories. If you need to see private repositories, you will need to authenticate as that particular user. You can then use the REST call:
/user/repos
to find all of the user's repos.
To do this in Python 2 (the example uses urllib2), do something like:

import base64
import urllib2

USER = 'AUSER'
API_TOKEN = 'ATOKEN'
GIT_API_URL = 'https://api.github.com'

def get_api(url):
    try:
        request = urllib2.Request(GIT_API_URL + url)
        base64string = base64.encodestring('%s/token:%s' % (USER, API_TOKEN)).replace('\n', '')
        request.add_header("Authorization", "Basic %s" % base64string)
        result = urllib2.urlopen(request)
        data = result.read()
        result.close()
        return data
    except:
        print 'Failed to get api request from %s' % url
Where the url passed to the function is the REST path, as in the examples above. If you don't need to authenticate, simply modify the method to remove the Authorization header. You can then fetch any public API URL with a simple GET request.
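As a more current alternative, here is a minimal JavaScript sketch of the same authenticated call; it assumes Node 18+ (global fetch) and a personal access token in a GITHUB_TOKEN environment variable, neither of which is part of the original answer:

// minimal sketch: list the authenticated user's repos (public and private)
async function listMyRepos() {
  const res = await fetch("https://api.github.com/user/repos?per_page=100", {
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: "application/vnd.github+json",
    },
  });
  const repos = await res.json();
  return repos.map((r) => r.full_name);
}

listMyRepos().then((names) => console.log(names));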
Try the following curl command to list the repositories:
GHUSER=CHANGEME; curl "https://api.github.com/users/$GHUSER/repos?per_page=100" | grep -o 'git@[^"]*'
To list cloned URLs, run:
GHUSER=CHANGEME; curl -s "https://api.github.com/users/$GHUSER/repos?per_page=1000" | grep -w clone_url | grep -o '[^"]\+://.\+.git'
If it's private, you need to add your API key (access_token=GITHUB_API_TOKEN), for example (note: GitHub has since deprecated query-parameter authentication in favour of the Authorization header):
curl "https://api.github.com/users/$GHUSER/repos?access_token=$GITHUB_API_TOKEN" | grep -w clone_url
If the user is an organisation, use /orgs/:username/repos instead to return all of its repositories.
To clone them, see: How to clone all repos at once from GitHub?
See also: How to download GitHub Release from private repo using command line
Here is a full spec for the repos API:
https://developer.github.com/v3/repos/#list-repositories-for-a-user
GET /users/:username/repos
Query String Parameters:
The first three are documented in the API link above. The parameters for page and per_page are documented elsewhere and are included here for a full description.
type (string): Can be one of all, owner, member. Default: owner
sort (string): Can be one of created, updated, pushed, full_name. Default: full_name
direction (string): Can be one of asc or desc. Default: asc when using full_name, otherwise desc
page (integer): Current page
per_page (integer): number of records per page
Since this is an HTTP GET API, in addition to cURL, you can try this out simply in the browser. For example:
https://api.github.com/users/grokify/repos?per_page=2&page=2
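If you prefer to build that query string programmatically, here is a small JavaScript sketch using URLSearchParams; the parameter values are just examples:

// build a repos URL from the documented query string parameters
const params = new URLSearchParams({
  type: "owner",      // all | owner | member
  sort: "updated",    // created | updated | pushed | full_name
  direction: "desc",  // asc | desc
  per_page: "50",
  page: "1",
});
console.log(`https://api.github.com/users/grokify/repos?${params}`);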
Using the gh command
You can use the GitHub CLI for this:
$ gh api users/:owner/repos
or
gh api orgs/:orgname/repos
For all the repos you'll want --paginate and you can combine this with --jq to show only name for each repo:
gh api orgs/:orgname/repos --paginate --jq '.[].name' | sort
If you have jq installed, you can use the following command to list all public repos of a user
curl -s https://api.github.com/users/<username>/repos | jq '.[]|.html_url'
You probably need a jsonp solution:
https://api.github.com/users/[user name]/repos?callback=abc
If you use jQuery:
$.ajax({
  url: "https://api.github.com/users/blackmiaool/repos",
  method: "GET",
  dataType: "jsonp", // makes jQuery append the callback parameter and evaluate the JSONP response
  success: function(res) {
    console.log(res)
  }
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
The NPM module repos grabs the JSON for all public repos of some user or group. You can run this directly from npx so you don't need to install anything; just pick an org or user ("W3C" here):
$ npx repos W3C W3Crepos.json
This will create a file called W3Crepos.json. grep is good enough to, e.g., fetch the list of repos:
$ grep full_name W3Crepos.json
pros:
Works with more than 100 repos (many answers to this question don't).
Not much to type.
cons:
Requires npx (or npm if you want to install it for real).
Retrieve the list of all public repositories of a GitHub user using Python:
import requests

username = input("Enter the github username:")
request = requests.get('https://api.github.com/users/' + username + '/repos')
json = request.json()
for i in range(0, len(json)):
    print("Project Number:", i + 1)
    print("Project Name:", json[i]['name'])
    print("Project URL:", json[i]['svn_url'], "\n")
There's now an option to use the awesome GraphQL API Explorer.
I wanted a list of all my org's active repos with their respective languages. This query does just that:
{
  organization(login: "ORG_NAME") {
    repositories(isFork: false, first: 100, orderBy: {field: UPDATED_AT, direction: DESC}) {
      pageInfo {
        endCursor
      }
      nodes {
        name
        updatedAt
        languages(first: 5, orderBy: {field: SIZE, direction: DESC}) {
          nodes {
            name
          }
        }
        primaryLanguage {
          name
        }
      }
    }
  }
}
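If you would rather run this outside the Explorer, here is a minimal sketch that POSTs a trimmed-down version of the query above to the GraphQL endpoint; it assumes Node 18+ (global fetch) and a personal access token in a GITHUB_TOKEN environment variable:

// minimal sketch: run a GraphQL query against the GitHub API
const query = `{
  organization(login: "ORG_NAME") {
    repositories(isFork: false, first: 100) {
      nodes { name updatedAt }
    }
  }
}`;

fetch("https://api.github.com/graphql", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `bearer ${process.env.GITHUB_TOKEN}`,
  },
  body: JSON.stringify({ query }),
})
  .then((res) => res.json())
  .then((data) => console.log(JSON.stringify(data, null, 2)));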
If you are looking for the repos of an organisation:
api.github.com/orgs/$NAMEOFORG/repos
Example:
curl https://api.github.com/orgs/arduino-libraries/repos
You can also add the per_page parameter to get all the names, in case there is a pagination problem:
curl https://api.github.com/orgs/arduino-libraries/repos?per_page=100
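per_page tops out at 100, so larger accounts still need paging. Here is a minimal JavaScript sketch (Node 18+, global fetch; unauthenticated, so rate limits apply) that keeps requesting pages until an empty one comes back; the org name is just an example:

// collect all repos of an org by paging until an empty page is returned
async function allOrgRepos(org) {
  const repos = [];
  for (let page = 1; ; page++) {
    const res = await fetch(`https://api.github.com/orgs/${org}/repos?per_page=100&page=${page}`);
    const batch = await res.json();
    if (!Array.isArray(batch) || batch.length === 0) break;
    repos.push(...batch);
  }
  return repos.map((r) => r.name);
}

allOrgRepos("arduino-libraries").then((names) => console.log(names));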
Using Python
import requests

username = "{USERNAME}"  # replace with the GitHub username
api_link = requests.get(f'https://api.github.com/users/{username}/repos')
repos_data = api_link.json()

for item in repos_data:
    print(f"- {item['name']}")
If you want to get all the repositories in a list (array) you could do something like this:
import requests

username = "{USERNAME}"  # replace with the GitHub username
api_link = requests.get(f'https://api.github.com/users/{username}/repos')
repos_data = api_link.json()

repos = [item['name'] for item in repos_data]
This stores all the repository names in the repos list.
Paging JSON
The JS code below is meant to be used in a console.
username = "mathieucaroff";
w = window;
Promise.all(Array.from(Array(Math.ceil(1+184/30)).keys()).map(p =>
fetch(`//api.github.com/users/{username}/repos?page=${p}`).then(r => r.json())
)).then(all => {
w.jo = [].concat(...all);
// w.jo.sort();
// w.jof = w.jo.map(x => x.forks);
// w.jow = w.jo.map(x => x.watchers)
})
If you want to limit the repositories list, you can just add ?per_page=3 after username/repos, e.g. username/repos?per_page=3. Instead of username, you can put any person's GitHub username.
HTML
<div class="repositories"></div>
JavaScript
// Github repos (the rendering below also relies on jQuery for $.each and $())
var request = new XMLHttpRequest();
request.open('GET', 'https://api.github.com/users/username/repos', true);
request.onload = function() {
  var data = JSON.parse(this.response);
  console.log(data);
  var statusHTML = '';
  $.each(data, function(i, status) {
    statusHTML += '<div class="card"> \
      <a href=""> \
        <h4>' + status.name + '</h4> \
        <div class="state"> \
          <span class="mr-4"><i class="fa fa-star mr-2"></i>' + status.stargazers_count + '</span> \
          <span class="mr-4"><i class="fa fa-code-fork mr-2"></i>' + status.forks_count + '</span> \
        </div> \
      </a> \
    </div>';
  });
  $('.repositories').html(statusHTML);
};
request.send();
The answer is "/users/:user/repos", but I have all the code that does this in an open-source project that you can use to stand up a web application on a server.
I stood up a GitHub project called Git-Captain that communicates with the GitHub API and lists all the repos.
It's an open-source web application built with Node.js utilizing the GitHub API to find, create, and delete a branch throughout numerous GitHub repositories.
It can be set up for organizations or a single user.
There is also a step-by-step guide on how to set it up in the read-me.
To get the URLs of the user's first 100 public repositories:
$.getJSON("https://api.github.com/users/suhailvs/repos?per_page=100", function(json) {
  var resp = '';
  $.each(json, function(index, value) {
    resp = resp + index + ' ' + value['html_url'] + ' -';
    console.log(resp);
  });
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
const express = require('express');
const request = require('request');
const config = require('config');

const router = express.Router();

router.get('/github/:username', (req, res) => {
  try {
    const options = {
      // keep the whole query string on one line so no whitespace ends up in the URL
      uri: `https://api.github.com/users/${req.params.username}/repos?per_page=5&sort=created:asc&client_id=${config.get('githubClientId')}&client_secret=${config.get('githubSecret')}`,
      method: 'GET',
      headers: { 'user-agent': 'node.js' }
    };
    request(options, (error, response, body) => {
      if (error) console.log(error);
      if (response.statusCode !== 200) {
        return res.status(404).json({ msg: 'No Github profile found.' });
      }
      res.json(JSON.parse(body));
    });
  } catch (err) {
    console.log(err.message);
    res.status(500).send('Server Error!');
  }
});
Using JavaScript fetch:
async function getUserRepos(username) {
  const response = await fetch(`https://api.github.com/users/${username}/repos`);
  return response.json(); // parse the JSON body rather than returning the raw Response
}

getUserRepos("[USERNAME]")
  .then(repos => {
    console.log(repos);
  });
Using the official GitHub command-line tool:
gh auth login
gh api graphql --paginate -f query='
query($endCursor: String) {
  viewer {
    repositories(first: 100, after: $endCursor) {
      nodes { nameWithOwner }
      pageInfo {
        hasNextPage
        endCursor
      }
    }
  }
}
' | jq ".[] | .viewer | .repositories | .nodes | .[] | .nameWithOwner"
Note: this will include all of your public, private and other people's repos that are shared with you.
References:
https://cli.github.com/manual/gh_api
Slightly improved version of #joelazar's answer, to get a cleaned list:
gh repo list <owner> -L 400 | awk '{print $1}' | sed "s/<owner>\///"
Replace <owner> with the owner name, of course.
This can get lists with more than 100 repos as well (in this case, 400).
And in case you want to list repo names filtered by specific topic:
gh search repos --owner=<org> --topic=payments --json name --jq ".[].name" --limit 200
A bit more memorable (simplified) version of this answer (upvoted), to get public repos only:
# quick way to get all (up to 100) public repos for user mirekphd
$ curl -s "https://api.github.com/users/mirekphd/repos?per_page=100" | grep full_name | sort
The current method to get all (up to 100) private repos from Github API (see docs and this answer):
# get all private repos from the Github API
# after logging with Personal Access Token (assuming secure 2FA is used)
$ export GITHUB_USER=mirekphd && export GITHUB_TOKEN=$(cat <path_redacted>/personal-github-token) && curl -s --header "Authorization: Bearer $GITHUB_TOKEN" --request GET "https://api.github.com/search/repositories?q=user:$GITHUB_USER&per_page=100" | grep "full_name" | sort
Just do it in Postman.
That way you can visualize the response, run scripts, etc.
Check it out:
https://learning.postman.com/docs/sending-requests/visualizer/