league/flysystem-aws-s3-v3: get custom metadata

I'm experimenting with the Flysystem library (it is amazing, by the way!).
I created some files on a remote S3 bucket, defining some custom metadata:
$conf = [
    'visibility' => 'public',
    'Metadata' => [
        'Content-type' => 'image/jpeg',
        'Generated' => 'true',
    ],
];
$response = $filesystem->write('/test/image.jpg', $image_stream, $conf);
'Generated' => 'true' is a custom metadata entry, and I can see it in the AWS S3 console.
However, I'm not able to read the custom metadata "Generated" on the listed resources after
$allFiles = $filesystem->listContents('/path/', true)->toArray();
##UPDATE 1
I understood that I should use the "getWithMetadata" plugin, as explained in the documentation: https://flysystem.thephpleague.com/v1/docs/advanced/provided-plugins/#get-file-info-with-explicit-metadata
It seems pretty easy, but there is no League\Flysystem\Filesystem::addPlugin() method in my version of the library.
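For reference, the usage described in those v1 docs looks roughly like this (a sketch assuming Flysystem v1, where the plugin system still exists):
use League\Flysystem\Plugin\GetWithMetadata;
// Flysystem v1 only: register the plugin, then request explicit metadata keys.
$filesystem->addPlugin(new GetWithMetadata());
$info = $filesystem->getWithMetadata('/test/image.jpg', ['mimetype', 'timestamp']);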
Can anybody help me?

To use the S3 Flysystem adapter you need to initialize the S3 client first:
$client = new Aws\S3\S3Client($options);
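A minimal wiring sketch, assuming Flysystem v3 with the league/flysystem-aws-s3-v3 adapter (the region and bucket name below are placeholders; $options would typically contain at least 'version' and 'region'):
$client = new Aws\S3\S3Client(['version' => 'latest', 'region' => 'eu-west-1']);
// Wire the S3 client into Flysystem through the AWS adapter.
$adapter = new League\Flysystem\AwsS3V3\AwsS3V3Adapter($client, 'my-bucket');
$filesystem = new League\Flysystem\Filesystem($adapter);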
With the S3 SDK you could then use:
$allFiles = $filesystem->listContents($path)->sortByPath()->toArray();
$file = $allFiles[0];

$headers = $client->headObject(array(
    "Bucket" => $bucket,
    "Key"    => $file->path(),
));
print_r($headers->toArray());
to get all of the file's metadata.
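The custom keys written via 'Metadata' should then show up under the 'Metadata' entry of that result. A minimal sketch (note that S3 may return user-defined metadata keys lowercased):
$meta = $headers['Metadata'] ?? array();
// 'Generated' was written as custom metadata in the question above.
$generated = $meta['Generated'] ?? ($meta['generated'] ?? null);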

Related

Perl : Exporting a Google Spreadsheet Into xlsx Format Using Net::Google::Drive

I am trying to export Google spreadsheets into xlsx Excel spreadsheets using Perl.
I am currently able to get the data from the Google spreadsheet in my script, but I'd prefer to export the file to a local Excel spreadsheet and get the information I need from there.
I can't manage to find a way to export a Google spreadsheet to a local xlsx file using Net::Google::Drive; only binary files can be obtained with the methods in that module.
Has anyone managed to do this? If so, could you share some code?
Thanks and regards.
X.
--
Here is my current code:
use Net::Google::Drive;
use Net::Google::DataAPI::Auth::OAuth2;
use Net::OAuth2::AccessToken;
use Storable;

my $client_id     = "XXXXXX.apps.googleusercontent.com";
my $client_secret = "YYYYY";

my $oauth2 = Net::Google::DataAPI::Auth::OAuth2->new(
    client_id     => $client_id,
    client_secret => $client_secret,
    scope         => ['https://www.googleapis.com/auth/drive.readonly'],
);

my $session        = retrieve('google_drive.session');
my $restored_token = Net::OAuth2::AccessToken->session_thaw(
    $session,
    auto_refresh => 1,
    profile      => $oauth2->oauth2_webserver,
);
$oauth2->access_token($restored_token);

my $disk = Net::Google::Drive->new(
    -client_id     => $client_id,
    -client_secret => $client_secret,
    -access_token  => $oauth2->access_token($restored_token),
    -refresh_token => $oauth2->refresh_token($restored_token),
);

my $file_id   = "XXXXX";
my $dest_file = 'd:\\test.xlsx';
$disk->downloadFile(
    -file_id   => $file_id,
    -dest_file => $dest_file,
);

How can I do multipart requests with Mojo::UserAgent?

I'd like to perform a multipart file upload to Google Drive as described here using a Mojo::UserAgent. I currently do it like this:
my $url = Mojo::URL->new('https://www.googleapis.com/upload/drive/v3/files');
$url->query({
    fields      => 'id,parents',
    ocr         => 'true',
    ocrLanguage => 'de',
    uploadType  => 'multipart',
});
my $tx = $ua->post(
    $url,
    json => { parents => ['0ByFk4UawESNUX1Bwak1Ka1lwVE0'] },
    {
        'Content-Type' => 'multipart/related',
        'parents'      => ['0ByFk4UawESNUX1Bwak1Ka1lwVE0'],
    },
    $content,
);
but it doesn't work.
I've managed authorization OK (omitted here) and simple file upload works fine. I just can't seem to do the multipart.
I've tried to make sense of the docs here, but to no avail: the file gets uploaded OK, but the JSON part gets ignored and the file ends up in the root folder.

dotenv-connector within TYPO3 CMS

I'm trying to use helhum/dotenv-connector in my TYPO3 project.
I have done the following:
My composer.json:
{
    "require": {
        "typo3/cms": "^8.5",
        "helhum/dotenv-connector": "1.0.0",
        "helhum/typo3-console": "^4.1"
    },
    "extra": {
        "helhum/typo3-console": {
            "install-extension-dummy": false
        },
        "typo3/cms": {
            "cms-package-dir": "{$vendor-dir}/typo3/cms",
            "web-dir": "web"
        },
        "helhum/dotenv-connector": {
            "env-dir": "",
            "allow-overrides": true,
            "cache-dir": "var/cache"
        }
    }
}
Then I ran
composer install
After that I set up TYPO3 using the command
php vendor/bin/typo3cms install:setup
This should be equivalent to doing the install the "normal" way.
After that, I placed a .env file next to my composer.json.
This .env contains the following:
TYPO3_CONTEXT="Development"
TYPO3__DB__database="dotenvconnector"
TYPO3__DB__host="127.0.0.1"
TYPO3__DB__password="root"
TYPO3__DB__port="3306"
TYPO3__DB__username="root"
Then I removed all information about the DB from web/typo3conf/LocalConfiguration.php using the typo3_console command
php vendor/bin/typo3cms configuration:remove DB
I then ran composer install and composer update again.
When calling TYPO3 in the browser now, it keeps telling me
The requested database connection named "Default" has not been configured.
So what am I missing? Obviously my .env is not parsed or used at all.
FYI: a cache file is written to var/cache with the following content:
<?php
putenv('TYPO3__DB__database=dotenvconnector');
$_ENV['TYPO3__DB__database'] = 'dotenvconnector';
$_SERVER['TYPO3__DB__database'] = 'dotenvconnector';
putenv('TYPO3__DB__host=localhost');
$_ENV['TYPO3__DB__host'] = 'localhost';
$_SERVER['TYPO3__DB__host'] = 'localhost';
putenv('TYPO3__DB__password=root');
$_ENV['TYPO3__DB__password'] = 'root';
$_SERVER['TYPO3__DB__password'] = 'root';
putenv('TYPO3__DB__port=3306');
$_ENV['TYPO3__DB__port'] = '3306';
$_SERVER['TYPO3__DB__port'] = '3306';
putenv('TYPO3__DB__username=root');
$_ENV['TYPO3__DB__username'] = 'root';
$_SERVER['TYPO3__DB__username'] = 'root';
Our setups work like this:
AdditionalConfiguration.php
$loader = new Dotenv\Dotenv(__DIR__ . '/../../', '.env.defaults');
$loader->load();
$loader = new Dotenv\Dotenv(__DIR__ . '/../../');
$loader->overload();
It's interesting to see here that we run with a .env.defaults file that holds the standard config (no users or passwords, of course), which we then overload with the custom .env file per user/environment.
This helps a lot when adding new functionality that requires new .env configuration, so other people on the team don't run into fatals or exceptions.
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['dbname'] = getenv('TYPO3_DB_NAME');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['host'] = getenv('TYPO3_DB_HOST');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['password'] = getenv('TYPO3_DB_PASSWORD');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['user'] = getenv('TYPO3_DB_USER');
LocalConfiguration.php
return [
    'BE' => [
        'debug' => '<set by dotenv>',
        'explicitADmode' => 'explicitAllow',
        'installToolPassword' => '<set by dotenv>',
        'loginSecurityLevel' => 'rsa',
        'sessionTimeout' => '<set by dotenv>',
    ],
    'DB' => [
        'Connections' => [
            'Default' => [
                'charset' => 'utf8',
                'dbname' => '<set by dotenv>',
                'driver' => 'mysqli',
                'host' => '<set by dotenv>',
                'password' => '<set by dotenv>',
                'port' => 3306,
                'user' => '<set by dotenv>',
            ],
        ],
    ]...
I didn't paste the entire config but I think you get the point.
The dotenv-connector reads the .env file into the environment, but does not assign any values to TYPO3 configuration variables. You should be able to read them with getenv() in your PHP code.
The connector is not specifically geared towards TYPO3, but is a general tool for any Composer-based PHP application. Therefore it would be out of the scope of the project to know about TYPO3-specific variable assignments.
There is another project, the configuration loader, that can help to assign environment variables to TYPO3 configuration variables.
.env -dotenv-connector-> environment -configuration-loader-> $GLOBALS['TYPO3_CONF_VARS']
The configuration loader can be found at https://github.com/helhum/config-loader, and an example of it all wired together at https://github.com/helhum/TYPO3-Distribution.
You don't have to use the configuration loader. You could also assign the values manually with getenv().
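A minimal sketch of that manual approach, assuming the TYPO3__DB__* variable names from the .env above, placed in typo3conf/AdditionalConfiguration.php:
// Map the environment variables set by dotenv-connector onto the TYPO3 connection settings.
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['dbname']   = getenv('TYPO3__DB__database');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['host']     = getenv('TYPO3__DB__host');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['port']     = (int)getenv('TYPO3__DB__port');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['user']     = getenv('TYPO3__DB__username');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['password'] = getenv('TYPO3__DB__password');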
One important note with PHP 7.2 (on TYPO3 v9) and the use of argon hashes:
You must use single quotes / ticks for such values in the .env file, since argon2 hashes contain $ characters that the .env parser may otherwise try to expand as variable references.
Example:
Instead of my_value="foobar"
write my_value='foobar'

Google cloud storage: How can I reset edge cache?

I updated an image (from PHP), but the old version of the image is still being downloaded.
If I download the image from the GCS console, I get the new version. However, the URL below returns the old version:
https://storage.googleapis.com/[bucket name]/sample-image.png
It seems that the old image is in Google's edge cache.
Some articles say that I should delete the image object and then insert the new image object so that the edge cache is cleared.
Does anyone know about this?
Update 1
This is my PHP code, which runs on GCE.
$obj = new \Google_Service_Storage_StorageObject();
$obj->setName($path . "/" . $name);

$client = new \Google_Client();
$client->useApplicationDefaultCredentials();
$client->addScope(\Google_Service_Storage::DEVSTORAGE_FULL_CONTROL);

$storage = new \Google_Service_Storage($client);
$bucket = 'sample.com';

$binary = file_get_contents($_FILES['files']['tmp_name']);
$fileInfo = new finfo(FILEINFO_MIME_TYPE);
$mimeType = $fileInfo->buffer($binary);

$storage->objects->insert($bucket, $obj, [
    'name' => $path . "/" . $name,
    'data' => $binary,
    'uploadType' => 'media',
    'mimeType' => $mimeType,
]);
It seems that only these parameters are valid. I don't think I can set any cache settings.
// Valid query parameters that work, but don't appear in discovery.
private $stackParameters = array(
    'alt' => array('type' => 'string', 'location' => 'query'),
    'fields' => array('type' => 'string', 'location' => 'query'),
    'trace' => array('type' => 'string', 'location' => 'query'),
    'userIp' => array('type' => 'string', 'location' => 'query'),
    'quotaUser' => array('type' => 'string', 'location' => 'query'),
    'data' => array('type' => 'string', 'location' => 'body'),
    'mimeType' => array('type' => 'string', 'location' => 'header'),
    'uploadType' => array('type' => 'string', 'location' => 'query'),
    'mediaUpload' => array('type' => 'complex', 'location' => 'query'),
    'prettyPrint' => array('type' => 'string', 'location' => 'query'),
);
https://github.com/google/google-api-php-client/blob/master/src/Google/Service/Resource.php
I tried the approach below, but it does not work so far. Is this only for GAE? (Or maybe mounting is necessary.)
$image = file_get_contents($gs_name);
$options = ["gs" => ["Content-Type" => "image/jpeg"]];
$ctx = stream_context_create($options);
file_put_contents("gs://<bucketname>/" . $fileName, $image, 0, $ctx);
How do I upload images to the Google Cloud Storage from PHP form?
Update 2
The API doc shows a cacheControl property in the request body. I guess that using the API directly (not via the SDK) is a way. I will try it.
https://cloud.google.com/storage/docs/json_api/v1/objects/insert
cacheControl (string, writable): Cache-Control directive for the object data.
I think I found it finally!
$obj->setCacheControl('no-cache');
Update 3
$bucket_name = 'my-bucket';
$file = "xxx.html";
$infotowrite = "999";

$service = new Google_Service_Storage($client);
$obj = new Google_Service_Storage_StorageObject();
$obj->setName($file);
$obj->setCacheControl('public, max-age=6000');

$results = $service->objects->insert(
    $bucket_name,
    $obj,
    ['name' => $file, 'mimeType' => 'text/html', 'data' => $infotowrite, 'uploadType' => 'media']
);
Set Cache-Control php client on Google Cloud Storage Object
We can check the result
gsutil ls -L gs://...
By default, if an object is publicly accessible to all anonymous users and you do not otherwise specify a cacheControl setting, GCS will serve a Cache-Control header of 3600 seconds, or 1 hour. If you're getting stale object data and haven't been messing with cache control settings, I assume you're serving publicly accessible objects. I'm not sure if Google itself is caching your object data or if there's some other cache between you and Google, though.
In the future, you can fix this by explicitly setting a shorter Cache-Control header, which can be controlled on a per-object basis with the cacheControl setting.
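For an existing object, the Cache-Control metadata can also be changed with gsutil, for example (bucket and object names are placeholders taken from the question):
gsutil setmeta -h "Cache-Control:private, max-age=0" gs://[bucket name]/sample-image.png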
Right now, you can probably get around this by tacking on some made up extra URL query parameter, like ?ignoreCache=1
More: https://cloud.google.com/storage/docs/xml-api/reference-headers#cachecontrol

Image Upload with Zend_Service_Nirvanix

I can't seem to upload an image using Zend_Service_Nirvanix. Is it even possible?
I have a feeling that my problem has something to do with not being able to figure out how to set the UploadHost on the Transfer Service.
Any help is greatly appreciated! My deadline is July 16th!
Here is my code:
$nirvanix = new Zend_Service_Nirvanix(array(
    'appKey'   => $key,
    'username' => $user,
    'password' => $pass,
));
$NSImfs = $nirvanix->getService('IMFS');

$options = array('sizeBytes' => filesize($source));
$storageNode = $NSImfs->getStorageNode($options);

$NSTransfer = $nirvanix->getService('Transfer');
$options = array(
    'uploadToken' => $storageNode->getStorageNode->UploadToken,
    'path'        => $original,
    'fileData'    => file_get_contents($source),
);
$result = $NSTransfer->uploadFile($options);
Here is the error I keep getting:
Zend_Service_Nirvanix_Exception: XML could not be parsed from response: Server Error in '/' Application. The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
Requested URL: /ws/Transfer/UploadFile.ashx
in /Applications/MAMP/bin/php5/lib/php/Zend/Service/Nirvanix/Response.php on line 119
You're getting a 404?
Have you checked for an updated version of that library?
Try going into the library and changing UploadFile.ashx to UploadFile.aspx. I don't think ashx is a standard extension.
Maybe that will fix it.
There's a commercial upload tool from Aurigma that supports file and image upload to Nirvanix. Here's the link to the help topic to check (see the "Uploading to Nirvanix" section there).
To do a local upload (rather than a web upload via the browser) you just have to call the putContents method, passing the file's data.
Example:
$nirvanix = new Zend_Service_Nirvanix(array(
    'appKey'   => $key,
    'username' => $user,
    'password' => $pass,
));
$NSImfs = $nirvanix->getService('IMFS');

$response = $NSImfs->putContents(
    $destination_file_and_path,
    file_get_contents($source_file)
);

if ($response->ResponseCode != 0) {
    echo 'Fail!';
}
You would only call GetStorageNode if you want to generate an upload token to pass to a browser.