CocoaPods pod repo push git - Swift

I am trying to push my pod to a local repo. Before that, I verified pod lib lint on my repo, and it works fine locally:
$ pod lib lint --swift-version=5.0 --allow-warnings
/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/universal-darwin18/rbconfig.rb:215: warning: Insecure world writable dir /usr/local/sbin in PATH, mode 040777
-> SFLocationManager (1.0)
- WARN | source: Git SSH URLs will NOT work for people behind firewalls configured to only allow HTTP, therefore HTTPS is preferred.
- NOTE | xcodebuild: note: Using new build system
- NOTE | xcodebuild: note: Planning build
- NOTE | xcodebuild: note: Constructing build description
SFLocationManager passed validation.
After this, I created tags and pushed them to the server:
$ git tag
0.1.0
0.1.1
1.0
Then I tried to test the pod repo push command for the local repo, which failed:
$ pod repo push git@git.url.com:ankit.thakur/locationmanager.git SFLocationManager.podspec --allow-warnings --swift-version=5.0 --local-only
/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/universal-darwin18/rbconfig.rb:215: warning: Insecure world writable dir /usr/local/sbin in PATH, mode 040777
Validating spec
-> SFLocationManager (1.0)
- WARN | source: Git SSH URLs will NOT work for people behind firewalls configured to only allow HTTP, therefore HTTPS is preferred.
- ERROR | file patterns: The `source_files` pattern did not match any file.
- NOTE | xcodebuild: note: Using new build system
- NOTE | xcodebuild: note: Planning build
- NOTE | xcodebuild: note: Constructing build description
[!] The `SFLocationManager.podspec` specification does not validate.
Then I removed the --local-only flag and ran again, but it still failed:
$ pod repo push git@git.url.com:ankit.thakur/locationmanager.git SFLocationManager.podspec --allow-warnings --swift-version=5.0
/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/universal-darwin18/rbconfig.rb:215: warning: Insecure world writable dir /usr/local/sbin in PATH, mode 040777
Validating spec
-> SFLocationManager (1.0)
- WARN | source: Git SSH URLs will NOT work for people behind firewalls configured to only allow HTTP, therefore HTTPS is preferred.
- ERROR | file patterns: The `source_files` pattern did not match any file.
- NOTE | xcodebuild: note: Using new build system
- NOTE | xcodebuild: note: Planning build
- NOTE | xcodebuild: note: Constructing build description
[!] The `SFLocationManager.podspec` specification does not validate.
Here is the pod version
$ pod --version
/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/universal-darwin18/rbconfig.rb:215: warning: Insecure world writable dir /usr/local/sbin in PATH, mode 040777
1.6.0
Here is the podspec file:
#
# Be sure to run `pod lib lint SFLocationManager.podspec' to ensure this is a
# valid spec before submitting.
#
# Any lines starting with a # are optional, but their use is encouraged
# To learn more about a Podspec see https://guides.cocoapods.org/syntax/podspec.html
#
Pod::Spec.new do |spec|
  spec.name = 'SFLocationManager'
  spec.version = '1.0'
  spec.summary = 'SFLocationManager is location based library for iOS and Mac'

  # This description is used to generate tags and improve search results.
  #   * Think: What does it do? Why did you write it? What is the focus?
  #   * Try to keep it short, snappy and to the point.
  #   * Write the description between the DESC delimiters below.
  #   * Finally, don't worry about the indent, CocoaPods strips it!
  spec.description = <<-DESC
  Location library in beta test version to fetch location with scheduled interval.
  DESC

  spec.homepage = 'https://git.url.com/ankit.thakur/locationmanager'
  # spec.screenshots = 'www.example.com/screenshots_1', 'www.example.com/screenshots_2'
  spec.license = { :type => 'MIT', :file => 'LICENSE' }
  spec.author = { 'ankitthakur' => 'ankit.thakur@url.com' }
  spec.source = { :git => 'git@git.url.com:ankit.thakur/locationmanager.git', :tag => spec.version.to_s }
  # spec.social_media_url = 'https://twitter.com/<TWITTER_USERNAME>'

  spec.requires_arc = true

  spec.ios.deployment_target = '10.0'
  spec.osx.deployment_target = '10.10'

  spec.source_files = 'SFLocationManager/Sources/Common/**/*.swift'
  # spec.ios.source_files = 'SFLocationManager/Sources/iOS/**/*.swift'
  # spec.osx.source_files = 'SFLocationManager/Sources/OSX/**/*.swift'

  # spec.resource_bundles = {
  #   'SFLocationManager' => ['SFLocationManager/Assets/*.png']
  # }

  spec.frameworks = 'CoreLocation'
  # spec.public_header_files = 'Pod/Classes/**/*.h'
  # spec.frameworks = 'UIKit', 'MapKit'
  # spec.dependency 'AFNetworking', '~> 2.3'
end
The ls output for the spec.source_files pattern is:
$ ls -al SFLocationManager/Sources/Common/**/*.swift
-rw-r--r--@ 1 ankitthakur staff 2710 Apr 25 18:02 SFLocationManager/Sources/Common/GeocoderUtils/Geocoder.swift
-rw-r--r--@ 1 ankitthakur staff 613 Apr 25 18:21 SFLocationManager/Sources/Common/LocationManager/LocationConfiguration.swift
-rw-r--r--@ 1 ankitthakur staff 324 Apr 25 18:02 SFLocationManager/Sources/Common/LocationManager/LocationError.swift
-rw-r--r--@ 1 ankitthakur staff 241 Apr 25 18:02 SFLocationManager/Sources/Common/LocationManager/LocationEventType.swift
-rw-r--r--@ 1 ankitthakur staff 7144 Apr 25 18:36 SFLocationManager/Sources/Common/LocationManager/LocationManager.swift
-rw-r--r--@ 1 ankitthakur staff 4649 Apr 25 18:02 SFLocationManager/Sources/Common/Model/Location.swift
-rw-r--r--@ 1 ankitthakur staff 3939 Apr 25 18:27 SFLocationManager/Sources/Common/Trigger/LocationTriggerManager.swift
As per the suggestions in the provided solutions, my updated podspec is:
#
# Be sure to run `pod lib lint SFLocationManager.podspec' to ensure this is a
# valid spec before submitting.
#
# Any lines starting with a # are optional, but their use is encouraged
# To learn more about a Podspec see https://guides.cocoapods.org/syntax/podspec.html
#
Pod::Spec.new do |spec|
  spec.name = 'SFLocationManager'
  spec.version = '1.0'
  spec.summary = 'SFLocationManager is location based library for iOS and Mac'

  # This description is used to generate tags and improve search results.
  #   * Think: What does it do? Why did you write it? What is the focus?
  #   * Try to keep it short, snappy and to the point.
  #   * Write the description between the DESC delimiters below.
  #   * Finally, don't worry about the indent, CocoaPods strips it!
  spec.description = <<-DESC
  Location library in beta test version to fetch location with scheduled interval.
  DESC

  spec.homepage = 'https://git.promobitech.com/ankit.thakur/locationmanager'
  # spec.screenshots = 'www.example.com/screenshots_1', 'www.example.com/screenshots_2'
  spec.license = { :type => 'MIT', :file => 'LICENSE' }
  spec.author = { 'ankitthakur' => 'ankit.thakur@promobitech.com' }
  spec.source = { :git => 'git@git.promobitech.com:ankit.thakur/locationmanager.git', :tag => spec.version.to_s }
  # spec.social_media_url = 'https://twitter.com/<TWITTER_USERNAME>'

  spec.requires_arc = true

  spec.ios.deployment_target = '10.0'
  spec.osx.deployment_target = '10.10'

  spec.source_files = 'SFLocationManager/Sources/Common/GeocoderUtils/*.{swift}',
                      'SFLocationManager/Sources/Common/LocationManager/*.{swift}',
                      'SFLocationManager/Sources/Common/Model/*.{swift}',
                      'SFLocationManager/Sources/Common/Trigger/*.{swift}'
  # spec.ios.source_files = 'SFLocationManager/Sources/iOS/**/*.{swift}'
  # spec.osx.source_files = 'SFLocationManager/Sources/OSX/**/*.{swift}'

  # spec.resource_bundles = {
  #   'SFLocationManager' => ['SFLocationManager/Assets/*.png']
  # }

  spec.frameworks = 'CoreLocation'
  # spec.public_header_files = 'Pod/Classes/**/*.h'
  # spec.frameworks = 'UIKit', 'MapKit'
  # spec.dependency 'AFNetworking', '~> 2.3'
end
but it is still not working.
Here is my project directory listing:
Admin:locationmanager ankitthakur$ ls -al
total 40
drwxr-xr-x 10 ankitthakur staff 320 Apr 25 20:38 .
drwxr-xr-x 9 ankitthakur staff 288 Apr 25 20:38 ..
-rw-r--r-- 1 ankitthakur staff 6148 Apr 25 20:38 .DS_Store
drwxr-xr-x 14 ankitthakur staff 448 Apr 26 14:50 .git
drwxr-xr-x 10 ankitthakur staff 320 Apr 25 20:38 Example
-rw-r--r-- 1 ankitthakur staff 1086 Apr 25 20:38 LICENSE
-rw-r--r-- 1 ankitthakur staff 1029 Apr 25 20:38 README.md
drwxr-xr-x 4 ankitthakur staff 128 Apr 25 20:51 SFLocationManager
-rw-r--r-- 1 ankitthakur staff 2241 Apr 26 14:49 SFLocationManager.podspec
lrwxr-xr-x 1 ankitthakur staff 27 Apr 25 20:38 _Pods.xcodeproj -> Example/Pods/Pods.xcodeproj

The error says:
file patterns: The `source_files` pattern did not match any file.
This means that you have written an incorrect pattern, so you should correct your source_files like the following:
s.source_files = "FOLDERNAME/*.{swift}"
(This will include all the Swift files under the folder "FOLDERNAME".)
In case you have multiple folders, do it like the following:
s.source_files = "FOLDERNAME1/*.{swift}", "FOLDERNAME2/*.{swift}"


Telegraf inputs.tail with zimbra.log

I have some questions: how can I set up the telegraf.conf file to collect logs from the "zimbra.log" file?
I tried to use this config text, but it does not work :(
I want to send these logs to Grafana.
One of the lines of "zimbra.log", for example:
Oct 1 10:20:46 webmail postfix/smtp[7677]: BD5BAE9999: to=user@mail.com, relay=mo94.cloud.mail.com[92.97.907.14]:25, delay=0.73, delays=0.09/0.01/0.58/0.19, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 4C25fk2pjFz32N5)
And I do not understand exactly how "grok_patterns =" works.
[[inputs.tail]]
files = ["/var/log/zimbra.log"]
from_beginning = false
grok_patterns = ['%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}']
name_override = "zimbra_access_log"
grok_custom_pattern_files = []
grok_custom_patterns = '''
TS_UNIX %{MONTH}%{SPACE}%{MONTHDAY}%{SPACE}%{HOUR}:%{MINUTE}:%{SECOND}
TS_CUSTOM %{MONTH}%{SPACE}%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}
'''
grok_timezone = "Local"
data_format = "grok"
I have copied your example line into a log file called Prueba.txt which contains the following lines:
Oct 3 00:52:32 webmail postfix/smtp[7677]: BD5BAE9999: to=user@mail.com, relay=mo94.cloud.mail.com[92.97.907.14]:25, delay=0.73, delays=0.09/0.01/0.58/0.19, dsn=2.0.0, status=sent (250 2.0$
Oct 13 06:25:01 webmail systemd-logind[949]: New session 229478 of user zimbra.
Oct 13 06:25:02 webmail zmconfigd[27437]: Shutting down. Received signal 15
Oct 13 06:25:02 webmail systemd-logind[949]: Removed session c296.
Oct 13 06:25:03 webmail sshd[28005]: Failed password for invalid user julianne from 120.131.2.210 port 10570 ssh2
I have been able to parse the data with this configuration of the tail.input plugin:
[[inputs.tail]]
files = ["Prueba.txt"]
from_beginning = true
data_format = "grok"
grok_patterns = ['%{TIMESTAMP_ZIMBRA} %{GREEDYDATA:source} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}']
grok_custom_patterns = '''
TIMESTAMP_ZIMBRA (\w{3} \d{1,2} \d{2}:\d{2}:\d{2})
'''
name_override = "log_frames"
You need to match the input string with regular expressions. For that there are some predefined patterns, such as GREEDYDATA = .*, that you can use to match your input (other examples are NUMBER = (?:%{BASE10NUM}) and BASE16NUM = (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))). You can also define your own patterns in grok_custom_patterns. Take a look at this website with some patterns: https://streamsets.com/documentation/datacollector/latest/help/datacollector/UserGuide/Apx-GrokPatterns/GrokPatterns_title.html
In this case I defined a TIMESTAMP_ZIMBRA pattern for matching inputs like Oct 3 00:52:32 and Oct 03 00:52:33 alike.
Here is the metric collected by Prometheus:
# HELP log_frames_delay Telegraf collected metric
# TYPE log_frames_delay untyped
log_frames_delay{delays="0.09/0.01/0.58/0.19",dsn="2.0.0",host="localhost.localdomain",message="BD5BAE9999:",path="Prueba.txt",program="postfix/smtp",relay="mo94.cloud.mail.com[92.97.907.14]:25",source="webmail",status="sent (250 2.0.0 Ok: queued as 4C25fk2pjFz32N5)",to="user@mail.com"} 0.73
P.S.: Ensure that Telegraf has access to the log files.
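If it helps, a quick way to iterate on patterns is Telegraf's test mode, which parses the inputs once and prints the resulting metrics to stdout instead of sending them anywhere (the config path here is an assumption; point it at the file containing your [[inputs.tail]] section):
telegraf --config /etc/telegraf/telegraf.conf --test
Lines that match grok_patterns are printed in line protocol; lines that do not match show up as parse errors, which makes pattern debugging much faster than restarting the agent.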

Can SparkContext.setCheckpointDir(hdfsPath) set same hdfsPath in different spark apps?

As per the docs:
https://spark.apache.org/docs/2.2.1/api/java/org/apache/spark/SparkContext.html#setCheckpointDir-java.lang.String-
SparkContext:
setCheckpointDir
public void setCheckpointDir(String directory)
Set the directory under which RDDs are going to be checkpointed.
Parameters:
directory - path to the directory where checkpoint files will be stored (must be HDFS path if running in cluster)
Questions:
1) If different Spark apps SparkContext.setCheckpointDir(hdfsPath) set the same hdfsPath, is there any conflict?
2) If there is no conflict, will the hdfsPath for CheckpointDir be cleaned automatically?
Questions:
1) If different Spark apps SparkContext.setCheckpointDir(hdfsPath) set the same hdfsPath, is there any conflict?
Answer: No conflict, as per the example given below. Multiple applications can use the same checkpoint directory; under it, a unique hash-like folder will be created to avoid conflicts.
2) If there is no conflict, will the hdfsPath for CheckpointDir be cleaned automatically?
Answer: Yes, it happens. For the example below I used local storage for demonstration, but local or HDFS does not matter; the behaviour will be the same.
Let's go through an example (run multiple times with the same checkpoint directory):
package examples

import java.io.File
import org.apache.log4j.Level

object CheckPointTest extends App {
  import org.apache.spark.sql.{Dataset, SparkSession}

  val spark = SparkSession.builder().appName("CheckPointTest").master("local").getOrCreate()
  val logger = org.apache.log4j.Logger.getLogger("org")
  logger.setLevel(Level.WARN)

  import spark.implicits._

  spark.sparkContext.setCheckpointDir("/tmp/checkpoints")
  val csvData1: Dataset[String] = spark.sparkContext.parallelize(
    """
      |id
      | a
      | b
      | c
    """.stripMargin.lines.toList).toDS()
  val frame1 = spark.read.option("header", true).option("inferSchema", true).csv(csvData1).show
  val checkpointDir = spark.sparkContext.getCheckpointDir.get
  println(checkpointDir)
  println("Number of Files in Check Point Directory " + getListOfFiles(checkpointDir).length)

  def getListOfFiles(dir: String): List[File] = {
    val d = new File(dir)
    if (d.exists && d.isDirectory) {
      d.listFiles.filter(_.isFile).toList
    } else {
      List[File]()
    }
  }
}
Result:
+---+
| id|
+---+
| a|
| b|
| c|
+---+
file:/tmp/checkpoints/30e6f882-b49a-42cc-9e60-59adecf13166
Number of Files in Check Point Directory 0 // this indicates that once the application finished, all the RDD/DS information was removed
If you have a look at the checkpoint folder, it will be like this:
user@f0189843ecbe [~/Downloads]$ ll /tmp/checkpoints/
total 0
drwxr-xr-x 2 user wheel 64 Mar 27 14:08 a2396c08-14b6-418a-b183-a90a4ca7dba3
drwxr-xr-x 2 user wheel 64 Mar 27 14:09 65c8ef5a-0e64-4e79-a050-7d1ee1d0e03d
drwxr-xr-x 2 user wheel 64 Mar 27 14:09 5667758c-180f-4c0b-8b3c-912afca59f55
drwxr-xr-x 2 user wheel 64 Mar 27 14:10 30e6f882-b49a-42cc-9e60-59adecf13166
drwxr-xr-x 6 user wheel 192 Mar 27 14:10 .
drwxrwxrwt 5 root wheel 160 Mar 27 14:10 ..
user@f0189843ecbe [~/Downloads]$ du -h /tmp/checkpoints/
0B /tmp/checkpoints//a2396c08-14b6-418a-b183-a90a4ca7dba3
0B /tmp/checkpoints//5667758c-180f-4c0b-8b3c-912afca59f55
0B /tmp/checkpoints//65c8ef5a-0e64-4e79-a050-7d1ee1d0e03d
0B /tmp/checkpoints//30e6f882-b49a-42cc-9e60-59adecf13166
0B /tmp/checkpoints/
Conclusion:
1) Even when multiple applications are running in parallel, there will be a unique hash under the checkpoint directory in which all the RDD/DS information will be stored.
2) After successful execution of each Spark application, the context cleaner will remove the contents in it; this is what I observed from the above practical example.
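As a side note, the cleanup observed above happens at application shutdown. If you also want checkpoint files removed while an application is still running, Spark has a documented cleaner property for that; a minimal sketch (spark.cleaner.referenceTracking.cleanCheckpoints defaults to false, and the rest mirrors the example above):
val spark = SparkSession.builder()
  .appName("CheckPointTest")
  .master("local")
  // Let the ContextCleaner delete checkpoint files once the
  // checkpointed RDD/Dataset goes out of scope (default: false).
  .config("spark.cleaner.referenceTracking.cleanCheckpoints", "true")
  .getOrCreate()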

What is the good way to edit the device tree? And where is it? (meta-sunxi)

I have currently built a Yocto core-image-minimal (with meta-sunxi) for an Orange Pi Zero board (a cheap Chinese board that I use for my studies):
https://github.com/linux-sunxi/meta-sunxi
And it successfully boots on my board, but in the /dev directory I do not have access to the SPI NOR memory. After some searching on the Orange Pi wiki, I found that I need to add some lines to my device tree: https://linux-sunxi.org/Orange_Pi_Zero#Installing_from_linux
&spi0 {
    status = "okay";
    flash: m25p80@0 {
        #address-cells = <1>;
        #size-cells = <1>;
        compatible = "winbond,w25q128";
        reg = <0>;
        spi-max-frequency = <40000000>;
    };
};
But I do not really understand how to proceed, because I cannot find which files I need to edit. And maybe this is not a good idea? I think it is better to create a .bbappend recipe, no?
The information that I have gathered by searching in the meta-sunxi directories:
in conf/machine/orange-pi-zero.conf: KERNEL_DEVICETREE = "sun8i-h2-plus-orangepi-zero.dtb"
but there is no "sun8i-h2-plus-orangepi-zero.dts" file in the meta-sunxi directories?
The "sun8i-h2-plus-orangepi-zero.dtb" file is present in /build/tmp/deploy/images/orange-pi-zero/, so I do not really know how it is generated. Is it only downloaded by Yocto? (No device tree compilation?)
By searching on the net I was able to find sun8i-h2-plus-orangepi-zero.dts
at: https://github.com/torvalds/linux/blob/master/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
and it contains these interesting lines:
&spi0 {
    /* Disable SPI NOR by default: it's optional on Orange Pi Zero boards */
    status = "disabled";
    flash@0 {
        #address-cells = <1>;
        #size-cells = <1>;
        compatible = "mxicy,mx25l1606e", "winbond,w25q128";
        reg = <0>;
        spi-max-frequency = <40000000>;
    };
};
So maybe someone is able to give me some advice on adding SPI NOR support for my board? What is the best way: make some .bbappend, or create my own meta layer by copying "meta-sunxi" and editing it? And then which files do I need to edit?
Thanks in advance for your time,
Pierre.
Compiling the image with Yocto and the meta BSP layer pulls the kernel (checked out into tmp/work-shared/<MACHINE>/kernel-source/) and compiles it, and you get the final output image, which you can flash from tmp/deploy/images/<MACHINE>/. But in your case the mainline kernel does not enable the SPI by default, so you need to enable it in the Linux kernel source code.
If you have the Yocto build set up already, then you can edit the device tree and prepare the patch. You can move into tmp/work-shared/orange-pi-zero/kernel-source/, edit the kernel source code, and change
status = "okay";
and prepare the git patch using the usual sequence:
git add arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
git commit -s -m "Enable SPI by default"
git format-patch HEAD~
Then you can append this patch in one of two ways:
Edit recipes-kernel/linux/linux-mainline_git.bb and add your patch file to SRC_URI, then copy the patch file into recipes-kernel/linux/linux-mainline.
If you don't want to edit the meta-sunxi layer, create linux-mainline_%.bbappend in your own meta layer and do the same, as sketched below.
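For the second option, a minimal sketch of the bbappend (meta-mylayer is a placeholder for your own layer; the patch file sits in a linux-mainline directory next to the bbappend):
# meta-mylayer/recipes-kernel/linux/linux-mainline_%.bbappend
# Let BitBake find files shipped alongside this bbappend.
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
# Apply the device tree patch on top of the kernel source.
SRC_URI += "file://0001-arm-dts-enable-SPI-for-orange-pi-zero.patch"
Note that ${PN} expands to linux-mainline here, so the patch is searched for in meta-mylayer/recipes-kernel/linux/linux-mainline/.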
The patch below can be directly applied to meta-sunxi to fix this case. You can find the same here.
From 3a1a3515d33facdf8ec9ab9735fb9244c65521be Mon Sep 17 00:00:00 2001
From: Parthiban Nallathambi <parthiban@linumiz.com>
Date: Sat, 10 Nov 2018 12:20:41 +0100
Subject: [PATCH] orange pi zero: Add SPI support by default
Signed-off-by: Parthiban Nallathambi <parthiban@linumiz.com>
---
...rm-dts-enable-SPI-for-orange-pi-zero.patch | 26 +++++++++++++++++++
recipes-kernel/linux/linux-mainline_git.bb | 1 +
2 files changed, 27 insertions(+)
create mode 100644 recipes-kernel/linux/linux-mainline/0001-arm-dts-enable-SPI-for-orange-pi-zero.patch
diff --git a/recipes-kernel/linux/linux-mainline/0001-arm-dts-enable-SPI-for-orange-pi-zero.patch b/recipes-kernel/linux/linux-mainline/0001-arm-dts-enable-SPI-for-orange-pi-zero.patch
new file mode 100644
index 0000000..e6d7933
--- /dev/null
+++ b/recipes-kernel/linux/linux-mainline/0001-arm-dts-enable-SPI-for-orange-pi-zero.patch
@@ -0,0 +1,26 @@
+From 1676d9767686404211c769de40e6aa55642b63d5 Mon Sep 17 00:00:00 2001
+From: Parthiban Nallathambi <parthiban@linumiz.com>
+Date: Sat, 10 Nov 2018 12:16:36 +0100
+Subject: [PATCH] arm: dts: enable SPI for orange pi zero
+
+Signed-off-by: Parthiban Nallathambi <parthiban@linumiz.com>
+---
+ arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts b/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
+index 0bc031fe4c56..0036065da81c 100644
+--- a/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
++++ b/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
+@@ -144,7 +144,7 @@
+
+ &spi0 {
+ /* Disable SPI NOR by default: it's optional on Orange Pi Zero boards */
+- status = "disabled";
++ status = "okay";
+
+ flash@0 {
+ #address-cells = <1>;
+--
+2.17.2
+
diff --git a/recipes-kernel/linux/linux-mainline_git.bb b/recipes-kernel/linux/linux-mainline_git.bb
index 5b8e321..9b2bcbe 100644
--- a/recipes-kernel/linux/linux-mainline_git.bb
+++ b/recipes-kernel/linux/linux-mainline_git.bb
@@ -27,5 +27,6 @@ SRCREV_pn-${PN} = "b04e217704b7f879c6b91222b066983a44a7a09f"
SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git;protocol=git;branch=master \
file://defconfig \
+ file://0001-arm-dts-enable-SPI-for-orange-pi-zero.patch \
"
S = "${WORKDIR}/git"
--
2.17.2

MarkLogic: Unusual type of log in MarkLogic error log file?

I'm noticing an unusual type of log entry in the MarkLogic error log file.
Thread: ACCESS_VIOLATION
# 1 0x0000000141830859 in xdmp::Tokenizer::operator++() (tokenizer.cpp:1599)
# 2 0x0000000140c605b4 in xdmp::RetokenizedAtomIterator::init() (datamodel.cpp:1494)
# 3 0x00000001405d0f83 in xdmp::FieldValueAtomIterator<xdmp::SwitchingAtomIterator>::firstNode()
(searchbuiltins.cpp:2700)
# 4 0x00000001405d1b23 in xdmp::FieldValueAtomIterator<xdmp::SwitchingAtomIterator>::nextNode()
(searchbuiltins.cpp:2790)
# 5 0x00000001405d1c31 in xdmp::FieldValueAtomIterator<xdmp::SwitchingAtomIterator>::operator++()
(searchbuiltins.cpp:2583)
# 6 0x0000000140652e38 in xdmp::FieldValueAtomIterator<xdmp::SwitchingAtomIterator>::getChars()
(searchbuiltins.cpp:2811)
# 7 0x000000014100ceff in xdmp::StandardIndexerEnv<xdmp::SwitchingAtomIterator>::getFieldChars()
(indexer.cpp:3044)
# 8 0x00000001410140b0 in xdmp::StandardIndexerEnv<xdmp::SwitchingAtomIterator>::putFieldRangeIndex()
(indexer.cpp:3056)
# 9 0x000000014103ac49 in xdmp::IndexerWalker<xdmp::SwitchingAtomIterator>::walkElemNodeEnd()
(indexer.cpp:11045)
# 10 0x000000014103f22d in xdmp::StandardIndexerEnv<xdmp::SwitchingAtomIterator>::pop()
(indexer.cpp:11642)
# 11 0x0000000141040027 in xdmp::IndexerWalker<xdmp::SwitchingAtomIterator>::walk()
(indexer.cpp:11833)
# 12 0x00000001410438e9 in xdmp::StandardIndexerEnv<xdmp::SwitchingAtomIterator>::putNode()
(indexer.cpp:2068)
# 13 0x00000001410468b2 in xdmp::StandardIndexerPostingData::putNode() (indexer.cpp:12300)
# 14 0x00000001418ade9b in xdmp::UpdatePath::fragmentInsert() (updatepath.cpp:335)
# 15 0x0000000140763208 in xdmp::refragment() (updatebuiltins.cpp:13237)
# 16 0x0000000140763505 in xdmp::refragment() (updatebuiltins.cpp:13319)
# 17 0x0000000140e24422 in xdmp::ForestReindexerThread::refragmentDocuments() (forest.cpp:22298)
# 18 0x0000000140e290da in xdmp::ForestReindexerThread::run() (forest.cpp:2628)
# 19 0x0000000141b18032 in svc::Thread::top() (thread.cpp:386)
# 20 0x0000000141b18279 in runThread() (thread.cpp:423)
# 21 0x0000000141d05453 in _callthreadstartex() (threadex.c:348)
# 22 0x0000000141d05507 in _threadstartex() (threadex.c:326)
It would be great if anyone could help me fix this.

Simply distributed index: precached 0 indexes

I have two simple indexes:
First, 01.conf:
searchd
{
    listen = 9301
    listen = 9401:mysql41
    pid_file = /var/run/sphinxsearch/searchd01.pid
    log = /var/log/sphinxsearch/searchd01.log
    query_log = /var/log/sphinxsearch/query01.log
    binlog_path = /var/lib/sphinxsearch/data/test/01
}

source base
{
    type = mysql
    sql_host = localhost
    sql_db = test
    sql_user = root
    sql_pass = toor
    sql_query_pre = SET NAMES utf8
    sql_attr_uint = group_id
}

source test : base
{
    sql_query = \
        SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, title, content \
        FROM documents WHERE id % 2 = 0
}

index test
{
    source = test
    path = /var/lib/sphinxsearch/data/test/01
}
The second looks like the first, but with "02" instead of "01" in the filename and inside.
And the distributed index in 00.conf:
searchd
{
    listen = 9305
    listen = 9405:mysql41
    pid_file = /var/run/sphinxsearch/searchd00.pid
    log = /var/log/sphinxsearch/searchd00.log
    query_log = /var/log/sphinxsearch/query00.log
    binlog_path = /var/lib/sphinxsearch/data/test
}

index test
{
    type = distributed
    agent = 127.0.0.1:9301:test
    agent = 127.0.0.1:9302:test
}
And I try to use the distributed index:
sudo searchd --config /etc/sphinxsearch/d/00.conf --stop
sudo searchd --config /etc/sphinxsearch/d/01.conf --stop
sudo searchd --config /etc/sphinxsearch/d/02.conf --stop
sudo searchd --config /etc/sphinxsearch/d/01.conf
sudo searchd --config /etc/sphinxsearch/d/02.conf
sudo indexer --all --rotate --config /etc/sphinxsearch/d/01.conf
sudo indexer --all --rotate --config /etc/sphinxsearch/d/02.conf
sudo searchd --config /etc/sphinxsearch/d/00.conf
Unfortunately I obtain the following output:
...
using config file '/etc/sphinxsearch/d/00.conf'...
listening on all interfaces, port=9305
listening on all interfaces, port=9405
precached 0 indexes in 0.000 sec
Why?
And when I try to search something with the distributed index (9305):
no enabled local indexes to search.
And the MySQL indexes work perfectly if I use them with ports 9301 and 9302 respectively. But searching in the distributed index returns nothing.
UPDATE
# tail /var/log/sphinxsearch/searchd00.log
[Thu Sep 29 23:43:04.599 2016] [ 2353] binlog: finished replaying /var/lib/sphinxsearch/data/test/binlog.001; 0.0 MB in 0.000 sec
[Thu Sep 29 23:43:04.599 2016] [ 2353] binlog: finished replaying total 4 in 0.000 sec
[Thu Sep 29 23:43:04.599 2016] [ 2353] accepting connections
[Thu Sep 29 23:43:24.336 2016] [ 2353] caught SIGTERM, shutting down
[Thu Sep 29 23:43:24.472 2016] [ 2353] shutdown complete
[Thu Sep 29 23:43:24.473 2016] [ 2352] watchdog: main process 2353 exited cleanly (exit code 0), shutting down
[Thu Sep 29 23:43:24.634 2016] [ 2404] watchdog: main process 2405 forked ok
[Thu Sep 29 23:43:24.635 2016] [ 2405] listening on all interfaces, port=9305
[Thu Sep 29 23:43:24.635 2016] [ 2405] listening on all interfaces, port=9405
[Thu Sep 29 23:43:24.636 2016] [ 2405] accepting connections
UPDATE2
Hmm... It seems the problem is in querying data from Sphinx. Also, I renamed the distributed index to test1. The following works well:
# mysql -h 127.0.0.1 -P 9405
mysql> select * from test1 where match ('one|two');
+------+----------+
| id | group_id |
+------+----------+
| 1 | 1 |
| 2 | 1 |
+------+----------+
2 rows in set (0,00 sec)
I think the problem was in the old version of sphinxapi.php that I used.
precached 0 indexes in 0.000 sec
Well, that in itself is normal. There are no local indexes to 'precache'; a distributed index has no index files to 'load' or (pre)cache.
... but searchd should still be running at the end of that; I think searchd should start up OK.
Try also checking /var/log/sphinxsearch/searchd00.log; it might have some more information.
Although I suppose it's possible Sphinx will not start up without any real indexes (i.e. you can't have JUST a distributed index), so you could just add a fake local index to that config, as sketched below.
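A minimal sketch of such a stub (the index name and path are made up; an RT index is convenient here because it needs no indexer run to exist):
index fake
{
    type = rt
    path = /var/lib/sphinxsearch/data/test/fake
    rt_field = content
    rt_attr_uint = gid
}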