Dependabot does not find latest commit - github

I'm exploring how Dependabot works and it isn't working as I expect.
I've created two private Go repos (one and two), with one depending on two:
one's go.mod:
module github.com/dazwilkin/one
go 1.17
require github.com/dazwilkin/two v0.0.0-20210927170438-e7aa41e4107b
NOTE e7aa41e4107b is intentionally a prior commit, in order to check VS Code's and Dependabot's update checking.
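For reference, the pseudo-version string encodes the commit timestamp and a 12-character commit-hash prefix, which is how it maps to the commit returned by the GitHub API below (this is just a breakdown of the value in go.mod, not additional output):

# v0.0.0-20210927170438-e7aa41e4107b
#   |         |               |
#   |         |               +-- 12-char prefix of commit e7aa41e4107b...
#   |         +------------------ commit time 2021-09-27T17:04:38Z (UTC)
#   +---------------------------- base version (no tag yet, so v0.0.0)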
And dependabot.yml:
version: 2
updates:
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "daily"
two's 2 most recent commits are:
curl \
--silent \
--header "Authorization: Bearer ${TOKEN}" \
https://api.github.com/repos/${OWNER}/${REPO}/commits \
| jq -r '.[]|{"sha":.sha,"date":.commit.committer.date}'
And:
{
"sha": "b2f2074829aa61218d7e38f27bb6051ccd97ab7a",
"date": "2021-09-27T18:03:33Z"
}
{
"sha": "e7aa41e4107b8c28f99cadfe55b831380730e808",
"date": "2021-09-27T17:04:38Z"
}
NOTE b2f2074829aa is the commit I'm expecting to be told about and e7aa41e4107b is the prior commit on two that one continues to reference.
VS Code quickly determined that an update is available, as does go list:
go list -m -u all
github.com/dazwilkin/one
github.com/dazwilkin/two v0.0.0-20210927170438-e7aa41e4107b [v0.0.0-20210927180333-b2f2074829aa]
NOTE Both correctly identify the latest commit (b2f2074829aa) as the replacement for the prior commit (e7aa41e4107b).
But, after 22 hours and repeated forced updates, Dependabot continues to report that e7aa41e4107b is current:
updater | INFO <job_214390230> Starting job processing
updater | INFO <job_214390230> Starting update job for DazWilkin/one
updater | INFO <job_214390230> Checking if github.com/dazwilkin/two 0.0.0-20210927170438-e7aa41e4107b needs updating
updater | INFO <job_214390230> Latest version is 0.0.0-20210927170438-e7aa41e4107b
updater | INFO <job_214390230> No update needed for github.com/dazwilkin/two 0.0.0-20210927170438-e7aa41e4107b
updater | INFO <job_214390230> Finished job processing
NOTE Dependabot appears to have no issue accessing github.com/dazwilkin/two but it doesn't find the most recent commit.
Is this just an eventual-consistency issue, and I need to wait longer?
Update: I've waited another 24 hours, and it continues to report the earlier commit as the latest version.
Or am I misunderstanding or misconfiguring Dependabot?
One perhaps relevant issue is that my GitHub account is mixed-case (DazWilkin) but, for simplicity, I'm publishing and referencing the Go modules using all-lowercase paths (github.com/dazwilkin). However, Dependabot appears to have no issues finding the prior commit.

I believe this is because Dependabot doesn't support pseudo-versions: https://github.com/dependabot/dependabot-core/issues/3017
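Until that's supported, the bump can be done by hand; a minimal sketch (the commands reuse the commit from the question, while the tag name v0.1.0 and the ../two path are just placeholders):

# In the "one" repo: point at the newer commit of "two"; Go records it as a pseudo-version
go get github.com/dazwilkin/two@b2f2074829aa
go mod tidy

# Alternative (an assumption, not from the linked issue): publish a semver tag on "two"
# so future updates are tracked as releases rather than pseudo-versions
git -C ../two tag v0.1.0
git -C ../two push origin v0.1.0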

Related

GitHub latest release is not the same as the newest release

This is the situation:
30 minutes ago, I made a release with the tag name v4.2.4.
Then, just now, I made a new release with the tag name 2022-07-18-0013 (this tag name is just a date; my company sometimes uses this style of version).
As far as I know, the latest release should be the newest release, but in my case only the semantic version (v4.2.4) gets the latest label.
Why does this happen?
I can't find any rule saying that only semantic versions are eligible to be latest.
(I want to know why this happens, because I use the "latest release" GitHub API.)
------------- EDIT ----------------
git log --oneline prints the result below:
0bc82b8 Merge pull request #166 from devstefancho/feature/0718_test1
2e85d9a add
6cc313e add
4c7e5b2 Merge pull request #165 from devstefancho/feature/0717_test2
f018fca test
b403615 Merge pull request #163 from devstefancho/feature/0717_test2
e7dd66f test
git log --graph --oneline
* 0bc82b8 Merge pull request #166 from devstefancho/feature/0718_test1
|\
| * 2e85d9a add
|/
* 6cc313e add
* 4c7e5b2 Merge pull request #165 from devstefancho/feature/0717_test2
|\
| * f018fca test
* | b403615 Merge pull request #163 from devstefancho/feature/0717_test2
|\|
| * e7dd66f test
|/
------------------- Solved --------------------
Thanks for the great answer, I finally figured it out!
Reason: same-day timestamps.
If the tags are not created on the same day, then the newest (by time) tag will be the latest tag.
This information was provided by a GitHub staff member:
Releases are based on Git tags, which mark a specific point in your repository’s history. The sort order of tags is as follows:
Tags are sorted by the timestamp of the underlying commit that they point to
If those commits are created on the same day, then the sorting is based on Semantic Versioning of the name of the tag (https://semver.org/)
If the Semantic Versioning is the same, they are sorted by second of creation
Pre-release versions have a lower precedence than the associated normal version.
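Since the question mentions relying on the "latest release" API, a quick way to see the difference is to compare that endpoint with the plain list of releases. A sketch, assuming OWNER, REPO and TOKEN environment variables (the Authorization header can be dropped for public repos):

# What GitHub labels "Latest" (drafts and pre-releases excluded)
curl --silent \
  --header "Authorization: Bearer ${TOKEN}" \
  https://api.github.com/repos/${OWNER}/${REPO}/releases/latest \
| jq -r .tag_name

# All releases with their creation timestamps, to compare against the rules above
curl --silent \
  --header "Authorization: Bearer ${TOKEN}" \
  https://api.github.com/repos/${OWNER}/${REPO}/releases \
| jq -r '.[] | .created_at + " " + .tag_name'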

Automatically reconnect failed tasks in Kafka-Connect

I'm using the mongo-source connector with Kafka Connect.
I checked the source task state, and it was running and listening on a Mongo collection.
I manually stopped the mongod service, waited about 1 minute, and then started it again.
I checked the source task to see if anything would fix itself, and after 30 minutes nothing seemed to work.
Only after restarting the connector did it start working again.
Since mongo-source doesn't have options to set retries + backoff on timeout, I searched for a configuration that would fit a simple scenario: restart a failed task after X time using Kafka Connect configuration. I couldn't find any. :/
I can do that with a simple script, but there must be something in Kafka Connect that manages failed tasks, or even in mongo-source... I don't want it to fail so fast after just 1 minute. :/
There isn't any way other than using the REST API to find a failed task and submit a restart request, and then running this on a periodic basis. For example:
curl -s "http://localhost:8083/connectors?expand=status" | \
jq -c -M 'map({name: .status.name } + {tasks: .status.tasks}) | .[] | {task: ((.tasks[]) + {name: .name})} | select(.task.state=="FAILED") | {name: .task.name, task_id: .task.id|tostring} | ("/connectors/"+ .name + "/tasks/" + .task_id + "/restart")' | \
xargs -I{connector_and_task} curl -v -X POST "http://localhost:8083"\{connector_and_task\}
Source: https://rmoff.net/2019/06/06/automatically-restarting-failed-kafka-connect-tasks/
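As one way to run this "on a periodic basis", the one-liner could be saved to a script and driven from cron; a sketch, where the script path, interval and log path are placeholders:

# Hypothetical crontab entry: check for FAILED Kafka Connect tasks every 5 minutes
*/5 * * * * /usr/local/bin/restart-failed-connect-tasks.sh >> /var/log/connect-restart.log 2>&1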

FIWARE Orion Runtime Error

I am using FIWARE Orion (in a Docker image) and I am facing the possibility of losing some records. I looked in the log and found a number of errors like the following:
time=Sunday 17 Dec 21:03:13 2017.743Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[287]:setStringVector | msg=Runtime Error (element 0 in array was supposed to be an string but type=3 from caller mongoSubCacheItemInsert:225)
According to http://fiware-orion.readthedocs.io/en/0.26.1/admin/logs/, these kinds of errors (Runtime) "may cause the Context Broker to fail" and "should be reported to the Orion development team using the appropriate channel", which is exactly what I am doing.
Any help will be highly appreciated.
Thank you very much in advance.
EDIT: Orion version is 1.5.0-next
EDIT: It has been upgraded to 1.10.0
EDIT: After executing ps ax | grep contextBroker I receive the following results:
23470 ? Ssl 4:24 /usr/bin/contextBroker -fg -multiservice -dbhost mongodb
EDIT: The problem occurs periodically. Actually, it takes place exactly every minute:
time=Wednesday 20 Dec 20:50:27 2017.235Z
time=Wednesday 20 Dec 20:51:27 2017.237Z
etc.
Orion 1.5.0-next means some version between 1.5.0 (released in October 2016) and 1.6.0 (released in December 2016). In the best case, your version is one year old, which is quite a long time.
Thus, I recommend you upgrade to the newest available Orion version (at the moment of writing, that is 1.10.0, released in December 2017). We have solved some "overlogging" problems in the delta of changes between 1.6.0 and 1.10.0, and the one you mention could be one of them.
If the problem persists after upgrading, mention it in a comment to this answer and we'll keep debugging.
Diagnosis
The 60-second periodicity is exactly the subscription cache refresh interval in the default configuration (your CLI confirms you are not using a different setting for the subscription cache).
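If needed, that interval can be confirmed or changed through the broker's CLI; a sketch based on the command line shown in the question (the -subCacheIval value of 300 seconds is just an example):

# Same command line as in the question, plus an explicit subscription-cache refresh interval
/usr/bin/contextBroker -fg -multiservice -dbhost mongodb -subCacheIval 300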
Looking in detail at the line referred to by the log trace in the Orion 1.10.0 source code:
setStringVectorF(sub, CSUB_CONDITIONS, &(cSubP->notifyConditionV));
The log error means that Orion expects an array of strings for the CSUB_CONDITIONS field in a document of the subscriptions collection in the database, but some (or all) of the elements in the array aren't strings but objects (type 3 means object, as the BSON specification details).
The CSUB_CONDITIONS constant corresponds to the conditions field in the DB. Note that this field changed in Orion 1.3.0. Before 1.3.0 (for instance, in 1.2.0) it was an array of objects:
"conditions" : [
{
"type" : "ONCHANGE",
"value" : [ "temperature " ]
}
]
From 1.3.0 on, it was simplified to an array of strings:
"conditions" : [ "temperature" ]
So my hypothesis is that at some moment in the past that Orion instance was updated across the 1.3.0 boundary, but without applying the data migration procedure (or the procedure was applied but failed in some way).
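A quick way to check that hypothesis is to look for subscriptions whose conditions array still contains objects; a minimal sketch, assuming the default orion database name and Orion's csubs collection (with -multiservice, repeat for each per-service database, and add --host if mongod is remote):

# Count subscriptions whose "conditions" array has elements of BSON type 3 (object)
mongo orion --quiet --eval 'db.csubs.count({"conditions": {$type: 3}})'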
Solution
Given that you are in a situation in which your data in the Orion database is probably inconsistent, the cleanest solution would be to remove your database, or at least the csubs collection.
However, this is possible only if you can regenerate the deleted data in an easy way. If that is not feasible, you can try the data migration procedure. In particular, the csub_merge_condvalues.py script should fix the problem, although I'd recommend applying the full procedure in order to fix other potential inconsistencies.
Take into account that the migration procedure was designed to be applied before starting to use the new Orion version. It seems you have been using a post-1.3.0 Orion with pre-1.3.0 data for some time, so your data may have evolved in some unexpected way that the procedure can't fix. Anyway, even in this case the procedure is better than nothing. :)
Note that if you are using multiple services (it seems so, given the -multiservice CLI parameter), you have to apply the clean/migration procedure to every per-service database.

Jenkins multibranch pipeline job CHANGE_ID not set

I have set up a Jenkins job with a multibranch pipeline.
GitHub is the SCM and is configured with a webhook to fire a build on a PR commit (existing or new PR).
The build is triggered and all goes fine [1], however CHANGE_ID is not set (null). We need CHANGE_ID to pass on to Sonar.
I am struggling to understand in which cases this parameter is set, and why it's null in our case.
Please consider this question from a Jenkins multibranch perspective.
Our (Git-related) plugin installations are listed here [2].
[1] Logging from Jenkins:
[Mon Jun 26 11:32:48 CEST 2017] Received Push event to branch BE-7394 in repository ServiceHouse/api UPDATED event from 172.18.0.1 ⇒ http://jenkins2.servicehouse.nl:8080/github-webhook/ with timestamp Mon Jun 26 11:32:43 CEST 2017
11:32:50 Connecting to https://api.github.com using shojenkinsuser/******
Looking up ServiceHouse/api
11:32:50 Connecting to https://api.github.com using shojenkinsuser/******
Looking up ServiceHouse/api
Getting remote branches...
Checking branch BE-7394
Getting remote branches...
Checking branch BE-7394
‘Jenkinsfile’ found
Met criteria
Changes detected: BE-7394 (01293286b6ee34056d8c92e21a6d39d18e537a81 → 35c16ef01bba5d27dd040a881cd3734fef271fd7)
Scheduled build for branch: BE-7394
0 branches were processed (query completed)
Done examining ServiceHouse/api
[2] Git-related installed plugins:
This variable is set up in the branch-api-plugin (setup source), and we have it working for pull requests or change requests.
For branches of the form -, it is not filled.
I can advise you to use:
BUILD_NUMBER
The current build number, such as "153"
BUILD_ID
The current build ID, identical to BUILD_NUMBER for builds created in 1.597+, but a YYYY-MM-DD_hh-mm-ss timestamp for older builds
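For the original goal of passing the value to Sonar: since CHANGE_ID is only populated when the multibranch job is building a change request (a PR), a build step can guard on it. A rough shell sketch, where the mvn goal and the sonar.* parameters are assumptions about the Sonar setup rather than anything from the question:

# Sketch only: use PR analysis when CHANGE_ID is present, branch analysis otherwise
if [ -n "${CHANGE_ID}" ]; then
  mvn sonar:sonar \
    -Dsonar.pullrequest.key="${CHANGE_ID}" \
    -Dsonar.pullrequest.branch="${CHANGE_BRANCH}" \
    -Dsonar.pullrequest.base="${CHANGE_TARGET}"
else
  mvn sonar:sonar -Dsonar.branch.name="${BRANCH_NAME}"
fi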

FATAL: Invalid id: Process leaked file descriptors - Jenkins (github)

I am getting the following error in Jenkins with GitHub.
Using strategy: Default
Last Built Revision: Revision 8d7d3f0898bc4583a80848033d6e0d27cc3e2096 (origin/master, origin/HEAD)
Fetching changes from 1 remote Git repository
Fetching upstream changes from origin
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Seen branch in repository origin/svn
Seen 3 remote branches
Commencing build of Revision 8d7d3f0898bc4583a80848033d6e0d27cc3e2096 (origin/master, origin/HEAD)
Checking out Revision 8d7d3f0898bc4583a80848033d6e0d27cc3e2096 (origin/master, origin/HEAD)
FATAL: Invalid id: Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
java.lang.IllegalArgumentException: Invalid id: Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
at org.eclipse.jgit.lib.ObjectId.fromString(ObjectId.java:232)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.doRevList(CliGitAPIImpl.java:959)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.revList(CliGitAPIImpl.java:945)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.isCommitInRepo(CliGitAPIImpl.java:970)
at hudson.plugins.git.GitAPI.isCommitInRepo(GitAPI.java:181)
at hudson.plugins.git.GitSCM.computeChangeLog(GitSCM.java:1292)
at hudson.plugins.git.GitSCM.access$1300(GitSCM.java:58)
at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1257)
at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1211)
at hudson.FilePath.act(FilePath.java:909)
at hudson.FilePath.act(FilePath.java:882)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1211)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1408)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581)
at hudson.model.Run.execute(Run.java:1603)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:247)
Normally it works fine during the day, but when I start my system again the next day it gives this error, and it can only be resolved by re-installing the GitHub plugin in Jenkins. The link in the error message talks about spawning processes from a build, but I don't understand what that means (I run only 1-2 builds, on Windows).
Thanks.