Inconsistency in GET BUCKET request - google-cloud-storage

I'm noticing differing results when listing the contents of folders within the same bucket. Specifically, sometimes the prefix folder itself is listed under the 'Contents' section (within a Key element), but other times it is not. See the following two outputs:
This output does not include the prefixed directory
<?xml version='1.0' encoding='UTF-8'?>
<ListBucketResult xmlns='http://doc.s3.amazonaws.com/2006-03-01'>
<Name>
test22</Name> <=== Bucket
<Prefix>
16-Jul-2013</Prefix> <=== Prefixed folder
<Marker>
</Marker>
<IsTruncated>
false</IsTruncated>
<Contents>
<Key>
16-Jul-2013/0371.txt</Key> <=== ONLY OBJECTS LISTED
<Generation>
1374016944689000</Generation>
<MetaGeneration>
1</MetaGeneration>
<LastModified>
2013-07-16T23:22:24.664Z</LastModified>
<ETag>
"5d858b3ddbf51fb5ec4501799e637b47"</ETag>
<Size>
96712</Size>
<Owner>
<ID>
00b4903a97d860d9d5a7d98a1c6385dc6146049499b88ceae217eaee7a0b2ff4</ID>
</Owner>
</Contents>
But this output does
<?xml version='1.0' encoding='UTF-8'?>
<ListBucketResult xmlns='http://doc.s3.amazonaws.com/2006-03-01'>
<Name>
test22</Name> <=== Bucket
<Prefix>
22-Aug-2013</Prefix> <=== Prefixed folder
<Marker>
</Marker>
<IsTruncated>
false</IsTruncated>
<Contents>
<Key>
22-Aug-2013/</Key> <=== FOLDER INCLUDED IN LIST
<Generation>
1377178774399000</Generation>
<MetaGeneration>
1</MetaGeneration>
<LastModified>
2013-08-22T13:39:34.337Z</LastModified>
<ETag>
"d41d8cd98f00b204e9800998ecf8427e"</ETag>
<Size>
0</Size>
<Owner>
<ID>
00b4903a97d0b7e1f638009476bba4c5d964f744e50c23c3681357a290cb7b16</ID>
</Owner>
</Contents>
Both requests were made with the following code (note I did not use an authenticated session; the items are public-readable):
uri = URI('https://storage.googleapis.com/test22?prefix=16-Jul-2013') <=== prefix changed for each case
req3 = Net::HTTP::Get.new(uri.request_uri)
#req3['Authorization'] = "#{token['token_type']} #{token['access_token']}"
req3['Content-Length'] = 0
req3['content-Type'] = 'text/plain - GB'
req3['Date'] = Time.now.strftime("%a, %d %b %Y %H:%M:%S %Z")
req3['Host'] = 'storage.googleapis.com'
req3['x-goog-api-version'] = 2
req3['x-goog-project-id'] = ###############
Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') { |http|
resp3 = http.request(req3)
puts resp3.body.gsub(/>/, ">\n")
}
Why the difference? Is there something basic I'm missing?
Thanks in advance...
-Lee

When you create a folder using the Cloud Console, it creates a placeholder object with the name of the folder + '/' to represent the empty folder. Even if you later add objects to the folder, the placeholder remains.
On the other hand, if you directly upload an object with a '/' in the name using the API (for example an upload to 'folder/object.txt') no placeholder object is created because the presence of the object is enough to infer the existence of the folder. If you delete 'folder/object.txt', the folder will no longer be listed in the root listing of the Cloud Console as there is no placeholder object.
To answer your question explicitly, that means that '16-Jul-2013/0371.txt' was created via a direct upload to '16-Jul-2013/0371.txt'. By contrast, '22-Aug-2013/' was created by the New Folder button in the Cloud Console. In the latter case a placeholder object is created, in the former, it is not.
All of this is because the GCS namespace is flat, not hierarchical. The folder abstraction is there to help you visualize things hierarchically, but it has some limitations.
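You can see this directly in a listing. Here is a minimal sketch using the Python client library (an assumption on my part; it uses the bucket and prefixes from the question and assumes the bucket allows unauthenticated listing): the Cloud Console placeholder shows up as a zero-byte object whose name ends in '/'.
from google.cloud import storage

# anonymous client, since the objects are public-readable
client = storage.Client.create_anonymous_client()

for prefix in ("16-Jul-2013", "22-Aug-2013"):
    for blob in client.list_blobs("test22", prefix=prefix):
        if blob.name.endswith("/") and blob.size == 0:
            print(blob.name, "<== Cloud Console folder placeholder")
        else:
            print(blob.name, "<== regular object")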

Related

How to resolve issue :: file_get_contents(http://localhost:8085/assets/data/test.txt): Failed to open stream: HTTP request failed

Using the CodeIgniter 4 framework, I've developed a RESTful API, and in it I need to access files (.json and .txt) to read their content. But I'm not able to read them with PHP's built-in function file_get_contents().
For more details, please check the attached screenshot API_curl_file_get_content_error.PNG
The test.txt file is also accessible at the same file path. For more details please check the screenshot Input-txt-file-content.png
NOTE : 1) test.txt file and respective directories have full permission.
2) Development environment :
   - Apache/2.4.47 (Win64) OpenSSL/1.1.1k PHP/8.1.2
   - Database client version: libmysql - mysqlnd 8.1.2
   - PHP version: 8.1.2
   - CodeIgniter 4
Product.php (index method)
<?php

namespace App\Controllers;

use CodeIgniter\RESTful\ResourceController;

class Product extends ResourceController
{
    use \CodeIgniter\API\ResponseTrait;

    public function index()
    {
        helper("filesystem");
        ini_set('max_execution_time', 300);
        $data['msg'] = "Product Index wizard";
        $data['status'] = 200;
        $file_path = base_url() . '/assets/data/test.txt'; // Read JSON
        $json = file_get_contents($file_path, true);
        $json_data = json_decode($json, true);
        return $this->respond($data);
    }
}
Explanation:
Remember that "http://localhost:8085/" most likely points to the document root of the project, which is usually the /public path. So, unless the "assets" folder resides in the /public path, file_get_contents("http://localhost:8085/assets/data/test.txt"); will fail to find the requested server resource.
Solution:
Since your file resource (test.txt) is on the local filesystem, read it through the filesystem instead of over HTTP.
Instead of:
file_get_contents("http://localhost:8085/assets/data/test.txt");
Use this:
constant ROOTPATH
The path to the project root directory. Just above APPPATH.
file_get_contents(ROOTPATH . "assets/data/test.txt");
Addendum:
I believe you also forgot to add the $json_data output to the returned $data variable in the Product::index resource controller method.
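Putting both points together, the index() method might look something like this (a sketch based on the code above; error handling omitted):
public function index()
{
    helper("filesystem");
    ini_set('max_execution_time', 300);

    $data['msg'] = "Product Index wizard";
    $data['status'] = 200;

    // read the file from the local filesystem instead of over HTTP
    $json = file_get_contents(ROOTPATH . 'assets/data/test.txt', true);
    $data['json_data'] = json_decode($json, true);

    return $this->respond($data);
}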

How to change the metadata of all files of a specific type among existing objects in Google Cloud Storage?

I have uploaded thousands of files to Google Storage, and I found out all the files are missing a content-type, so my website cannot serve them correctly.
I wonder if I can set some kind of policy to change the content-type of all the files at the same time. For example, I have a bunch of .html files inside the bucket:
a/b/index.html
a/c/a.html
a/c/a/b.html
a/a.html
.
.
.
Is it possible to set the content-type of all the .html files with one command, even though they are in different places?
You could do:
gsutil -m setmeta -h Content-Type:text/html gs://your-bucket/**.html
There's no single command to achieve exactly the behavior you are looking for (one command to edit all the objects' metadata); however, there's a gsutil command to edit the metadata, which you could use in a bash script to loop through all the objects inside the bucket.
1.- Option (1) is to use the gsutil command "setmeta" in a bash script:
# Kinda pseudo code here:
# list all your objects' names and iterate, editing the metadata of each one.
for OBJECT in $(gsutil ls gs://[BUCKET_NAME]/**)
do
    gsutil setmeta -h "[METADATA_KEY]:[METADATA_VALUE]" "$OBJECT"
done
2.- You could also write a small C++ program (using the google-cloud-cpp storage client) to achieve the same thing:
namespace gcs = google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string bucket_name,
   std::string key, std::string value) {
    // list all the objects in the bucket and, inside the loop, edit the metadata of each one
    for (auto&& object_metadata : client.ListObjects(bucket_name)) {
        if (!object_metadata) continue;  // skip objects we failed to list
        std::string object_name = object_metadata->name();
        gcs::ObjectMetadata desired = *object_metadata;
        desired.mutable_metadata().emplace(key, value);
        StatusOr<gcs::ObjectMetadata> updated = client.UpdateObject(
            bucket_name, object_name, desired,
            gcs::Generation(object_metadata->generation()));
    }
}
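If you'd rather script it in Python, a minimal sketch with the google-cloud-storage client would look like the following (assuming default credentials; "your-bucket" is the placeholder bucket name from the gsutil example above). It sets the Content-Type directly, which is what the question asks for:
from google.cloud import storage

client = storage.Client()

# iterate over every object in the bucket and patch the Content-Type of the .html ones
for blob in client.list_blobs("your-bucket"):
    if blob.name.endswith(".html"):
        blob.content_type = "text/html"
        blob.patch()  # pushes only the changed metadata fields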

Automatically create i18n directory for VSCode extension

I am trying to understand the workflow presented in https://github.com/microsoft/vscode-extension-samples/tree/master/i18n-sample for localizing Visual Studio Code extensions.
I cannot figure out how the i18n directory gets created to begin with, as well as how the set of string keys in that directory get maintained over time.
There is one line in the README.md which says "You could have created this folder by hand, or you could have used the vscode-nls-dev tool to extract it."...how would one use vscode-nls-dev tool to extract it?
What I Understand
I understand that you can use vscode-nls, and wrap strings like this: localize("some.key", "My String") to pick up the localized version of that string at runtime.
I am pretty sure I understand that vscode-nls-dev is used at build time to substitute the content of files in the i18n directory into the transpiled JavaScript code, as well as creating files like out/extension.nls.ja.json
What is missing
Surely it is not expected that: for every file.ts file in your project you create an i18n/lang/out/file.i18n.json for every lang you support...and then keep the set of keys in that file up to date manually with every string change.
I am assuming that there is some process which automatically goes "are there any localize("key", "String") calls in file.ts for new keys not yet in file.i18n.json? If so, add those keys with some untranslated values". What is that process?
I have figured this out, referencing https://github.com/Microsoft/vscode-extension-samples/issues/74
This is built to work if you use Transifex for your translator. At the bare minimum you need to use .xlf files as your translation file format.
I think that this is best illustrated with an example, so let's say you wanted to get the sample project working after you had deleted the i18n folder.
Step 1: Clone that project, and delete the i18n directory
Step 2: Modify the gulp file so that the compile function also generates nls metadata files in the out directory. Something like:
function compile(buildNls) {
    var r = tsProject.src()
        .pipe(sourcemaps.init())
        .pipe(tsProject()).js
        .pipe(buildNls ? nls.rewriteLocalizeCalls() : es.through())
        .pipe(buildNls ? nls.createAdditionalLanguageFiles(languages, 'i18n', 'out') : es.through())
        .pipe(buildNls ? nls.bundleMetaDataFiles('ms-vscode.node-debug2', 'out') : es.through())
        .pipe(buildNls ? nls.bundleLanguageFiles() : es.through());
    // ... then write the sourcemaps and pipe the result to gulp.dest('out'), as in the sample gulpfile
}
Step 3: Run the gulp build command. This will generate several necessary metadata files in the out/ directory
Step 4: Create and run a new gulp task to export the necessary translations to the xlf file. Something like:
gulp.task('export-i18n', function() {
    return gulp.src(['package.nls.json', 'out/nls.metadata.header.json', 'out/nls.metadata.json'])
        .pipe(nls.createXlfFiles("vscode-extensions", "node-js-debug2"))
        .pipe(gulp.dest(path.join('vscode-translations-export')));
});
Step 5: Get the resulting xlf file translated. Or, add some dummy values. I can't find if/where there is documentation for the file format needed, but this worked for me (for the extension):
<?xml version="1.0" encoding="utf-8"?>
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
<file original="package" source-language="en" target-language="ja" datatype="plaintext"><body>
<trans-unit id="extension.sayHello.title">
<source xml:lang="en">Hello</source>
<target>JA_Hello</target>
</trans-unit>
<trans-unit id="extension.sayBye.title">
<source xml:lang="en">Bye</source>
<target>JA_Bye</target>
</trans-unit>
</body></file>
<file original="out/extension" source-language="en" target-language="ja" datatype="plaintext"><body>
<trans-unit id="sayHello.text">
<source xml:lang="en">Hello</source>
<target>JA_Hello</target>
</trans-unit>
</body></file>
<file original="out/command/sayBye" source-language="en" target-language="ja" datatype="plaintext"><body>
<trans-unit id="sayBye.text">
<source xml:lang="en">Bye</source>
<target>JA_Bye</target>
</trans-unit>
</body></file>
</xliff>
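For reference, the trans-unit ids above line up with the keys passed to localize() in the extension source. A hypothetical excerpt (not the sample's exact code) showing that correspondence:
import * as vscode from 'vscode';
import * as nls from 'vscode-nls';

const localize = nls.loadMessageBundle();

// 'sayHello.text' is the key that appears as the trans-unit id under <file original="out/extension">
vscode.window.showInformationMessage(localize('sayHello.text', 'Hello'));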
Step 6: Stick that file in some known location, let's say /path/to/translation.xlf. Then add/run another new gulp task to import the translation. Something like:
gulp.task('i18n-import', () => {
    return es.merge(languages.map(language => {
        console.log(language.folderName)
        return gulp.src(["/path/to/translation.xlf"])
            .pipe(nls.prepareJsonFiles())
            .pipe(gulp.dest(path.join('./i18n', language.folderName)));
    }));
});
Step 7: Run the gulp build again.
The i18n/ directory should now be recreated correctly! Running the same build/export/translate/import/build steps will pick up any new changes to the localize() calls in your TypeScript code
Obviously this is not perfect, there are a lot of hardcoded paths and such, but hopefully it helps out anyone else who hits this issue.

Drools - Understanding this example with drl file stored in a different location than the Kbase name attribute of the kmodule file

I was looking at this example with the KieBase defined as follows:
<?xml version="1.0" encoding="UTF-8"?>
<kmodule
xmlns="http://jboss.org/kie/6.0.0/kmodule">
<kbase name="kbase1">
<ksession name="ksession1"/>
</kbase>
</kmodule>
I was expecting the drl file to be in src/main/resources/kbase1. I made this assumption because reading the book Mastering JBoss Drools 6, it was indicated that the kbase name is where the drl file should be placed. The sentence reads below:
'Notice that the name attribute of the KieBase is the same as the directory structure that we are using under /src/test/resources/ directory, where the rules are stored. '
However, in the example, the drl file is placed in the following directory:
src/main/resources/namedkiesession.
Here is the file: HAL1.drl
package org.drools.example.api.namedkiesession
import org.drools.example.api.namedkiesession.Message
global java.io.PrintStream out
rule "rule 1" salience 10 when
m : Message( )
then
out.println( m.getName() + ": " + m.getText() );
end
rule "rule 2" when
Message( text == "Hello, HAL. Do you read me, HAL?" )
then
insert( new Message("HAL", "Dave. I read you." ) );
end
How does this work if the KieBase name is not where the actual drl file is stored? The full source code is here:
https://github.com/kiegroup/drools/blob/6.0.x/drools-examples-api/pom.xml
You have just found an error (or typo) in the book. The name of a KieBase is an arbitrary name you choose and has no relation with the directory structure where the .drl files are located.
The attribute of a <kbase> element that you can use to tell which packages (directories) to include is the packages attribute, not the name attribute. If the packages attribute is not specified, then the KieBase will include any artifact (drl, bpmn, dtable, etc.) located under the resources directory.
You can find a better explanation of the attributes of a <kbase> in the official Drools documentation (look for table 4.1).
Note that starting from version 7.18, the semantics of the packages attribute suffered a slight (but fundamental) change, as stated in the documentation.
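For example, in the project you linked you could make the mapping explicit with the packages attribute (a sketch; everything else is taken from the kmodule.xml above, and the package name matches the one declared in HAL1.drl):
<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
    <kbase name="kbase1" packages="org.drools.example.api.namedkiesession">
        <ksession name="ksession1"/>
    </kbase>
</kmodule>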
Hope it helps,

Is it possible to replace a file from one repository to another with android repo tool?

I'm using the repo tool to build a Yocto project. The repositories used are A, B, yocto, ..., and I need to replace a file in B with a file from A. The structure is something like this:
A/MyFile.sh
B/TheFile.sh
yocto/Some_dirs_and_files
So, I use the copyfile like this:
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote fetch="mygitrepo" name="origin"/>
<default remote="origin"/>
<project name="yocto" revision="myrevision"/>
<project name="meta-openembedded" path="yocto/meta-openembedded" revision="myrevision"/>
<project name="B" path="yocto/B" revision="myrevision"/>
<project name="C" path="yocto/meta-swi-extras" revision="myrevision"/>
<project name="poky" path="yocto/poky" revision="myrevision"/>
<project name="A" path="yocto/custom-builds" revision="myrevision">
<copyfile src="MyFile.sh" dest="yocto/B/TheFile.sh"/>
</project>
</manifest>
The problem is that the copyfile is not replacing the file "TheFile.sh" with "MyFile.sh"
Is there a way to do it without an additional script?
Note: If I change the dest name from
dest="yocto/B/TheFile.sh"
to
dest="yocto/B/AnotherFile.sh"
the file is successfully copied, but if I set the name to the file I want to replace, it doesn't get copied.
It seems repo does now allow overwriting a file via <copyfile src=... dest=...>.
From the repo source code, project.py:
class _CopyFile(object):
    def __init__(self, src, dest, abssrc, absdest):
        self.src = src
        self.dest = dest
        self.abs_src = abssrc
        self.abs_dest = absdest

    def _Copy(self):
        src = self.abs_src
        dest = self.abs_dest
        # copy file if it does not exist or is out of date
        if not os.path.exists(dest) or not filecmp.cmp(src, dest):  # <== condition for doing a copy
            try:
                # remove existing file first, since it might be read-only
                if os.path.exists(dest):
                    platform_utils.remove(dest)
                else:
                    dest_dir = os.path.dirname(dest)
                    if not platform_utils.isdir(dest_dir):
                        os.makedirs(dest_dir)
                shutil.copy(src, dest)
                # make the file read-only
                mode = os.stat(dest)[stat.ST_MODE]
                mode = mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH)
                os.chmod(dest, mode)
            # ... (exception handling follows in the source)
The marked line shows the condition for doing a copy: it runs whenever the destination does not exist or differs from the source, so it can replace an existing file.
Is this still open?
I just happened to come across the same issue, wanting to overwrite a file in repository2 with a file from repository1; the <copyfile> was in place to replace myfile in repository2 with myfile from repository1. I was using a Yocto distribution (with several layers, as git repositories).
But it did not work.
Copying myfile from repository1 as myfile2 to repository2 (another name) worked.
What I discovered was that, running the repo init/sync commands several times, the repos were not populated in the same order each time.
So basically my <copyfile> did what it was supposed to do when repository1 was populated, but that happened before repository2 was populated (even though they were in the right order in the manifest file). And repository2 simply brought along its own myfile, overwriting the one copied by repository1.
My mega-solution was to use two <copyfile> tags: one in repository1 to copy myfile as myfile2 into repository2, and the other in repository2 to copy myfile2 as myfile.
You have to make sure though that repository1 is always populated before repository2.
This whole thing is very strange, since repo does not guarantee the order in which repositories are populated.
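A manifest sketch of that two-step workaround (project names, paths and revisions are illustrative, following the style of the manifest in the question):
<project name="repository1" path="yocto/repository1" revision="myrevision">
    <!-- stage the file under a temporary name inside repository2's checkout -->
    <copyfile src="myfile" dest="yocto/repository2/myfile2"/>
</project>
<project name="repository2" path="yocto/repository2" revision="myrevision">
    <!-- after repository2 is populated, overwrite its own myfile with the staged copy -->
    <copyfile src="myfile2" dest="yocto/repository2/myfile"/>
</project>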