I am using the ArtifactoryGenericDownload@3 task to download .whl files from JFrog Artifactory. However, I only want to download the latest version, which is currently python/de-cf-dnalib/0.7.0. This cannot be hardcoded because the version changes from time to time. Could you please suggest a way to handle the versioning in my configuration?
- task: ArtifactoryGenericDownload@3
  inputs:
    connection: "JFROG"
    specSource: "taskConfiguration"
    fileSpec: |
      {
        "files": [
          {
            "pattern": "python/*.whl",
            "target": "./$(Pipeline.Workspace)/de-cf-dnalib"
          }
        ]
      }
    failNoOp: true
result:
{
  "files": [
    {
      "pattern": "python/de-cf-dnalib/*.whl",
      "target": ".//datadisk/agents-home/...work/744/de-cf-dnalib"
    }
  ]
}
Executing JFrog CLI Command: /datadisk/hostedtoolcache/jfrog/1.53.2/x64/jfrog rt dl --url="https://jfrog.io/artifactory" --access-token=*** --spec="/datadisk/agents-home/agent-0/azl-da-d-02-0/_work/744/s/downloadSpec1656914680005.json" --fail-no-op=true --dry-run=false --insecure-tls=false --threads=3 --retries=3 --validate-symlinks=false --split-count=3 --min-split=5120
[Info] Searching items to download...
[Info] [Thread 2] Downloading python/de-cf-dnalib/0.5.0/de_cf_dnalib-0.5.0-py3-none-any.whl
[Info] [Thread 1] Downloading python/de-cf-dnalib/0.6.0/de_cf_dnalib-0.6.0-py3-none-any.whl
[Info] [Thread 0] Downloading python/de-cf-dnalib/0.7.0.dev0/de_cf_dnalib-0.7.0.dev0-py3-none-any.whl
[Info] [Thread 2] Downloading python/de-cf-dnalib/0.7.0/de_cf_dnalib-0.7.0-py3-none-any.whl
{
  "status": "success",
  "totals": {
    "success": 4,
    "failure": 0
  }
}
fileSpec also supports filtering with Artifactory Query Language (AQL) instead of a pattern.
With AQL you can sort by version or creation date and keep only the latest recorded file, for example:
items.find({
  "repo": "my-repo",
  "name": {"$match": "*.jar"}
}).include("name", "created").sort({"$desc": ["created"]}).limit(1)
You can read more about AQL in the following link:
https://www.jfrog.com/confluence/display/JFROG/Artifactory+Query+Language
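Building on that, here is a sketch of how the task's file spec could be limited to the most recently created wheel using the spec's sortBy, sortOrder and limit properties. This assumes your JFrog CLI / task version supports these spec fields, so treat it as a starting point rather than a verified configuration:
- task: ArtifactoryGenericDownload@3
  inputs:
    connection: "JFROG"
    specSource: "taskConfiguration"
    fileSpec: |
      {
        "files": [
          {
            "pattern": "python/de-cf-dnalib/*.whl",
            "sortBy": ["created"],
            "sortOrder": "desc",
            "limit": 1,
            "target": "./$(Pipeline.Workspace)/de-cf-dnalib"
          }
        ]
      }
    failNoOp: true
Note that sorting by creation date may still pick up a pre-release build such as 0.7.0.dev0 if it was published last, so you may additionally need an exclusion pattern for dev wheels.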
UPDATE: I was able to get this working by setting "ProduceReferenceAssembly" to false in the .csproj files of the libs. Not sure if this is optimal or intended but that is what worked for me. See: Ref folder within .NET 5.0 bin folder
I'm trying to set up a proof of concept using nx-dotnet and Azure, following this example .yml: https://nx.dev/recipes/ci/monorepo-ci-azure
I have 3 services (libs) and 3 APIs (apps). I made a change to one of the APIs to test caching and incremental builds.
The unchanged projects all say [remote cache], but then the build fails because it's looking for the .dlls in the /obj/Debug/ directory. Why does it use that when there are .dlls in the /dist directory?
How can I fix this? Is there something in the nx.json or project.json files I need to change?
(Screenshot of the failing build: https://i.stack.imgur.com/IQhaO.png)
I tried running the same command locally on my machine and it completes as expected. I expect the build to complete; it only fails when remote caching is used.
{
  "name": "ShipmentService",
  "$schema": "../../node_modules/nx/schemas/project-schema.json",
  "projectType": "library",
  "sourceRoot": "libs/ShipmentService",
  "targets": {
    "build": {
      "executor": "@nx-dotnet/core:build",
      "outputs": [
        "{workspaceRoot}/dist/libs/ShipmentService",
        "{workspaceRoot}/libs/ShipmentService/obj"
      ],
      "options": {
        "configuration": "Debug",
        "noDependencies": true
      },
      "configurations": {
        "production": {
          "configuration": "Release"
        }
      }
    },
    "lint": {
      "executor": "@nx-dotnet/core:format"
    }
  },
  "tags": []
}
I tried the proposed workaround; here's what I'm noticing: platformservice:build [remote cache], followed by an error. It does see the intermediates part, but it's basically the same issue: the same error.
Updated project.json (all of them have been updated to look similar to this; I tried with and without the /obj portion):
"outputs": [
"{workspaceRoot}/dist/libs/ShipmentService",
"{workspaceRoot}/dist/intermediates/libs/ShipmentService/obj"
],
This is a bug on nx-dotnet's side; we aren't quite capturing all of the outputs that are needed for the cache. If you add the path to the obj directory to the outputs array of the build target in project.json, it should work. Here's the workaround, which will eventually be migrated:
I've got a branch with this working; you do indeed need the obj directory as part of the cache. There are some weird intricacies with this, though. I'll work on a migration + patch. In the meantime, the workaround that I used is:
Update Directory.Build.props adding these to the property group containing the output path manipulation:
<BaseIntermediateOutputPath>$(RepoRoot)dist/intermediates/$(ProjectRelativePath)/obj</BaseIntermediateOutputPath>
<IntermediateOutputPath>$(BaseIntermediateOutputPath)</IntermediateOutputPath>
As an example, the full file looks like this on the nx-dotnet repo now:
<Project>
  <PropertyGroup>
    <!-- Output path configuration -->
    <RepoRoot>$([System.IO.Path]::GetFullPath('$(MSBuildThisFileDirectory)'))</RepoRoot>
    <ProjectRelativePath>$([MSBuild]::MakeRelative($(RepoRoot), $(MSBuildProjectDirectory)))</ProjectRelativePath>
    <BaseOutputPath>$(RepoRoot)dist/$(ProjectRelativePath)</BaseOutputPath>
    <OutputPath>$(BaseOutputPath)</OutputPath>
    <BaseIntermediateOutputPath>$(RepoRoot)dist/intermediates/$(ProjectRelativePath)/obj</BaseIntermediateOutputPath>
    <IntermediateOutputPath>$(BaseIntermediateOutputPath)</IntermediateOutputPath>
    <AppendTargetFrameworkToOutputPath>true</AppendTargetFrameworkToOutputPath>
  </PropertyGroup>
  <PropertyGroup>
    <RestorePackagesWithLockFile>false</RestorePackagesWithLockFile>
  </PropertyGroup>
</Project>
Your project.json file should look something like this now:
{
  "name": "demo-webapi",
  "sourceRoot": "demo/apps/webapi",
  "targets": {
    "build": {
      "executor": "@nx-dotnet/core:build",
      "outputs": [
        "{workspaceRoot}/dist/demo/apps/webapi",
        "{workspaceRoot}/dist/intermediates/demo/apps/webapi"
      ],
      "options": {
        "configuration": "Debug",
        "noDependencies": true
      },
      "configurations": {
        "production": {
          "configuration": "Release"
        }
      }
    }
  }
}
I am executing EMR (spark-submit) steps through Airflow 2.0, submitting them as shown below.
My s3://dbook/ bucket holds all the files needed for spark-submit. First I copy all the files to EMR (the "Copy S3 to EMR" step) and then run the spark-submit command, but I am getting an error:
"no module named config". I need to know what args are being sent to the EMR cluster. How can I achieve this?
SPARK_STEPS = [
    {
        'Name': 'Copy S3 to EMR',
        'ActionOnFailure': 'CANCEL_AND_WAIT',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['aws', 's3', 'cp', 's3://dbook/', '.', '--recursive'],
        },
    },
    {
        'Name': 'Spark-Submit Command',
        'ActionOnFailure': 'CANCEL_AND_WAIT',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': [
                'spark-submit',
                '--py-files',
                'config.zip,jobs.zip',
                'main.py',
            ],
        },
    },
]
Thanks,
Xi
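For reference, here is a minimal sketch of how steps like these are typically wired into an Airflow 2.0 DAG with the Amazon provider's EmrAddStepsOperator and EmrStepSensor. The DAG id, connection id, and job flow id below are placeholders rather than values from the question:
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr_add_steps import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr_step import EmrStepSensor

# SPARK_STEPS is the list of step dictionaries shown above.
with DAG(
    dag_id="emr_spark_submit_example",  # hypothetical DAG id
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # Submit both steps (the S3 copy and the spark-submit) to the running cluster.
    add_steps = EmrAddStepsOperator(
        task_id="add_emr_steps",
        job_flow_id="j-XXXXXXXXXXXXX",  # placeholder EMR cluster id
        aws_conn_id="aws_default",
        steps=SPARK_STEPS,  # these dicts are sent as-is to the EMR AddJobFlowSteps API
    )

    # Wait for the second step (the spark-submit command) to finish.
    watch_spark_submit = EmrStepSensor(
        task_id="watch_spark_submit",
        job_flow_id="j-XXXXXXXXXXXXX",
        step_id="{{ task_instance.xcom_pull(task_ids='add_emr_steps', key='return_value')[1] }}",
        aws_conn_id="aws_default",
    )

    add_steps >> watch_spark_submit
Because the Args lists are passed through to EMR unchanged, one way to see exactly which arguments reached the cluster is to open the step's details in the EMR console (or run aws emr describe-step) after the DAG has submitted it.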
After installing extensions in TYPO3 CMS 8.7.27, I got the following error. It seems like the ExtensionManagementUtility can't load ah_contentapi.
This is my composer.json file in the root (/var/www/html/typo3) for loading my extensions:
{
  "repositories": [
    {
      "type": "composer",
      "url": "https://composer.typo3.org/"
    },
    {
      "type": "package",
      "package": {
        "name": "Bm/ah-content-api",
        "version": "0.0.1",
        "type": "typo3-cms-extension",
        "source": {
          "url": "https://user@bitbucket.org/company/ah_config_typo3.git",
          "type": "git",
          "reference": "master"
        }
      }
    },
    {
      "type": "package",
      "package": {
        "name": "Bm/ah-contentelements",
        "version": "0.0.1",
        "type": "typo3-cms-extension",
        "source": {
          "url": "https://user@bitbucket.org/company/ah_contentelements_typo3.git",
          "type": "git",
          "reference": "master"
        }
      }
    }
  ],
  "name": "typo3/cms-base-distribution",
  "description": "TYPO3 CMS Base Distribution",
  "license": "GPL-2.0-or-later",
  "require": {
    "helhum/typo3-console": "^4.9.3 || ^5.2",
    "typo3/cms-about": "^8.7.10",
    "typo3/cms-belog": "^8.7.10",
    "typo3/cms-beuser": "^8.7.10",
    "typo3/cms-context-help": "^8.7.10",
    "typo3/cms-documentation": "^8.7.10",
    "typo3/cms-felogin": "^8.7.10",
    "typo3/cms-fluid-styled-content": "^8.7.10",
    "typo3/cms-form": "^8.7.10",
    "typo3/cms-func": "^8.7.10",
    "typo3/cms-impexp": "^8.7.10",
    "typo3/cms-info": "^8.7.10",
    "typo3/cms-info-pagetsconfig": "^8.7.10",
    "typo3/cms-rte-ckeditor": "^8.7.10",
    "typo3/cms-setup": "^8.7.10",
    "typo3/cms-sys-note": "^8.7.10",
    "typo3/cms-t3editor": "^8.7.10",
    "typo3/cms-tstemplate": "^8.7.10",
    "typo3/cms-viewpage": "^8.7.10",
    "typo3/cms-wizard-crpages": "^8.7.10",
    "typo3/cms-wizard-sortpages": "^8.7.10",
    "typo3/cms": "^8.7",
    "dmitryd/typo3-realurl": "2.*",
    "GridElementsTeam/Gridelements": "8.2.*",
    "clickstorm/cs_seo": "3.*",
    "Bm/ah-content-api": "0.0.1",
    "Bm/ah-contentelements": "0.0.1"
  },
  "scripts": {
    "typo3-cms-scripts": [
      "typo3cms install:fixfolderstructure",
      "typo3cms install:generatepackagestates"
    ],
    "post-autoload-dump": [
      "@typo3-cms-scripts"
    ]
  },
  "extra": {
    "typo3/cms": {
      "web-dir": "public"
    },
    "helhum/typo3-console": {
      "comment": "This option is not needed any more for helhum/typo3-console 5.x",
      "install-extension-dummy": false
    }
  },
  "autoload": {
    "psr-4": {
      "Bm\\AhContentelements\\": "public/typo3conf/ext/ah_contentelements/Classes",
      "Bm\\AhContentapi\\": "public/typo3conf/ext/ah_content_api/Classes"
    }
  }
}
I already cleared the cache in the Install Tool at:
1. Important actions -> Clear all cache
2. Clean up -> Clean typo3temp/ folder
An excerpt from composer.lock:
{
  "_readme": [
    "This file locks the dependencies of your project to a known state",
    "Read more about it at https://getcomposer.org/doc/01-basic-usage.md#installing-dependencies",
    "This file is @generated automatically"
  ],
  "content-hash": "954afd2318d54ec9b1dd0e4d7f9b445b",
  "packages": [
    {
      "name": "Bm/ah-content-api",
      "version": "0.0.1",
      "source": {
        "type": "git",
        "url": "https://stevenhippovibe@bitbucket.org/hippovibe/ah_config_typo3.git",
        "reference": "master"
      },
      "type": "typo3-cms-extension"
    },
    {
      "name": "Bm/ah-contentelements",
      "version": "0.0.1",
      "source": {
        "type": "git",
        "url": "https://stevenhippovibe@bitbucket.org/stevenhippovibe/ah_contentelements_typo3.git",
        "reference": "master"
      },
      "type": "typo3-cms-extension"
    },
The error occurs when the extension folder name under typo3conf/ext/<folder_name> doesn't match the extension key used in some places of the system (e.g. the EXT:your_extension_key/... syntax in TypoScript).
Changing the folder name fixed a similar problem for me.
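For example (purely illustrative; the actual extension key and installed folder name depend on your setup), if TYPO3 is looking for the key ah_contentapi but the package was checked out into public/typo3conf/ext/ah_content_api, renaming the folder to match the key would look like this:
mv public/typo3conf/ext/ah_content_api public/typo3conf/ext/ah_contentapi
You would then also need to adjust the matching autoload path in composer.json and regenerate PackageStates.php (e.g. via the typo3cms install:generatepackagestates script already in your composer.json).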
Check the PHP version and try changing it, e.g. from 7.4 to 7.3.
I once had this problem with an extension that was supposed to be compatible with PHP 7.4 but wasn't in practice. This solved the problem for me.
The questions here are:
How did you update to 8.7.27 (which composer command was executed)?
What does your composer.lock look like?
Do you use TYPO3 console or any other special composer plugins / CLI commands, e.g. to generate PackageStates.php?
I just ran into the same error message under TYPO3 9.5.5.
Solution:
Uninstall the TYPO3 extensions one after the other and try again each time. This will lead you to the extension that has the error. Most probably the error is inside the file ext_localconf.php or ext_tables.php.
I got this error detail:
PHP Warning: Use of undefined constant FH_DEBUG_EXT - assumed 'FH_DEBUG_EXT' (this will throw an Error in a future version of PHP) in /var/www/html/global-extensions/ext/div2007/ext_localconf.php line 15
This has nothing to do with your error, but it may be that you have an error in one of your installed extensions, or even in a backup of an extension, e.g. a folder named extensionname.bak.
These recommendations can also help:
https://wiki.typo3.org/Exception/CMS/1476107295
We used to follow the instructions here to set the bucket lifecycle policy, but with the latest gcloud components update we are getting an error like this:
Failure: Unsupported tag SetStorageClass.
Searching the GCS storage lifecycle docs did not turn up any relevant change.
The command we used is gsutil lifecycle set <json file> gs://<bucket name>/
and the gsutil version is 4.25.
{
  "lifecycle": {
    "rule": [
      {
        "action": {
          "type": "SetStorageClass",
          "storageClass": "NEARLINE"
        },
        "condition": {
          "age": 30,
          "matchesStorageClass": [
            "REGIONAL",
            "STANDARD",
            "DURABLE_REDUCED_AVAILABILITY"
          ]
        }
      }
    ]
  }
}
EDIT 2
This was fixed in this GitHub commit, which has been included in the newest version (v4.26) of gsutil.
EDIT
It looks like you actually uncovered a bug that occurs when using the XML API. I've opened a GitHub issue and will work on fixing this ASAP:
https://github.com/GoogleCloudPlatform/gsutil/issues/427
Thanks for the report!
Looking at the code in the Boto library, you're probably trying to specify SetStorageClass as a JSON key:
{
  ...
  "SetStorageClass": ...
  ...
}
rather than making it the value of the action's type attribute. Here's an example using your (fixed) sample from a question comment:
{
  "lifecycle": {
    "rule": [
      {
        "action": {
          "type": "SetStorageClass",
          "storageClass": "NEARLINE"
        },
        "condition": {
          "age": 30,
          "matchesStorageClass": ["STANDARD", "DURABLE_REDUCED_AVAILABILITY"]
        }
      }
    ]
  }
}
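With gsutil 4.26 or later (per the edit above), applying and then verifying the policy is just the following, where lifecycle.json and the bucket name are placeholders:
gsutil lifecycle set lifecycle.json gs://my-bucket/
gsutil lifecycle get gs://my-bucket/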
I have Sensu running and followed the instructions the best I could to install the Slack plugin. I'm attempting to just do a "hello-world" to get started, but the documentation seems lacking to me.
I followed the "getting started" with checks:
https://sensuapp.org/docs/0.20/getting-started-with-checks
and everything seems to be in the correct place on the server.
I am attempting to install the following community plugin, but they have a catch-all instruction for all community plugins. There is a JSON file in the plugin instructions, but it doesn't say where to put it:
https://github.com/sensu-plugins/sensu-plugins-slack
Here is what my check_cron.json looks like (I tried two methods, one from a source other than Sensu):
{
  "checks": {
    "cron_checks": {
      "handlers": ["default", "slack"],
      "command": "/etc/sensu/plugins/check-procs.rb -p cron -C 1",
      "interval": 60,
      "subscribers": ["webservers"]
    },
    "cron": {
      "handlers": ["default", "slack"],
      "command": "/etc/sensu/plugins/check-procs.rb -p cron",
      "subscribers": [
        "production",
        "webservers"
      ],
      "interval": 60
    }
  }
}
I have restarted my server after making the changes. I'm assuming that this check will run every minute and call the Slack notification plugin, but I don't know what I'm missing, or where to put the .json doc from the Slack plugin "documentation":
https://github.com/sensu-plugins/sensu-plugins-slack
Any help pointing me in the right direction?
You need a handler on the Sensu server that will fire the request to Slack. Have you created that? If yes, please post its content.
So I just solved this. benishkey did provide the solution in the link; however, just in case anyone comes across this and the link is broken, I thought I would add the solution here.
From GitHub user eugene-chow:
The Slack handler's config needs to be named differently. Try the JSON below. I renamed the Slack config for each environment, and then pointed the handler to the respective config with -j config_name.
{
  "handlers": {
    "slack-staging": {
      "type": "pipe",
      "command": "/usr/local/bin/handler-slack.rb -j slack-staging",
      "severities": ["critical", "unknown"]
    }
  },
  "slack-staging": {
    "webhook_url": "https://hooks.slack.com/services/...",
    "template": ""
  }
}
{
  "handlers": {
    "slack-production": {
      "type": "pipe",
      "command": "/usr/local/bin/handler-slack.rb -j slack-production",
      "severities": ["critical", "unknown"]
    }
  },
  "slack-production": {
    "webhook_url": "https://hooks.slack.com/services/...",
    "template": ""
  }
}
I dropped the handler-slack.rb file in with my checks and referenced it from there because it wasn't in my /usr/local/bin/ folder
I was facing the same issue. The answer is already given, but maybe this will help someone in the future.
First, install the Sensu Slack plugin:
/opt/sensu/embedded/bin/gem install sensu-plugins-slack
Then, create a handler config file:
vim /etc/sensu/conf.d/slack-handler.json
The handler script is handler-slack.rb: https://github.com/sensu-plugins/sensu-plugins-slack/blob/master/bin/handler-slack.rb
{
  "handlers": {
    "slack": {
      "type": "pipe",
      "command": "/opt/sensu/embedded/bin/handler-slack.rb",
      "severities": ["critical", "unknown"]
    }
  },
  "slack": {
    "webhook_url": "https://your_webhook.com/abc",
    "template": ""
  }
}
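To tie it together, a check then just lists slack in its handlers array so its events are piped to the handler above. A minimal sketch (the check name, plugin path, and subscription are taken from the question and may need adjusting), e.g. in /etc/sensu/conf.d/check_cron.json:
{
  "checks": {
    "cron": {
      "command": "/etc/sensu/plugins/check-procs.rb -p cron -C 1",
      "subscribers": ["webservers"],
      "interval": 60,
      "handlers": ["default", "slack"]
    }
  }
}
After changing anything under /etc/sensu/conf.d, restart the Sensu server and API services so the new handler and check definitions are loaded.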
I found the answer in the "Issues" section on GitHub:
https://github.com/sensu-plugins/sensu-plugins-slack/issues/7