Sphinx autocomplete search

I'm trying to build a Google-style autocomplete search with Sphinx and AJAX.
Say a user is looking for an iPhone. The goal is that input like "ip", "iph", or "ipho" should return the result, but it doesn't, while "iphon" and "iphone" do.
So, what am I doing wrong here?
index product
{
source = product
path = /var/lib/sphinx/product
docinfo = extern
mlock = 0
morphology = stem_enru
min_word_len = 2
charset_type = utf-8
charset_table = 0..9, A..Z->a..z, _, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F
min_prefix_len = 2
max_substring_len = 6
enable_star = 1
}
and the query
$sphinx = new SphinxClient();
$sphinx->SetLimits(0, 1500, 2500);
$sphinx->SetServer('localhost', 9312);
$sphinx->SetMatchMode(SPH_MATCH_EXTENDED);
$sphinx->SetSortMode(SPH_SORT_RELEVANCE);
$sphinx->SetFieldWeights(array ('name' => 30, 'brand' => 20, 'parent_name' => 10, 'description' => 5));
$result = $sphinx->Query($string, '*');
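One likely cause (not stated in the question, but consistent with the config above): with enable_star = 1, Sphinx only prefix-matches terms that carry an explicit trailing star, so "ip" has to be sent as "ip*" ("iphon" matches anyway because stem_enru stems "iphone" down to it). A minimal query-rewriting sketch in Python, purely illustrative; the helper name and thresholds are made up:

```python
def star_query(query, min_len=2):
    """Append a trailing * to each term so Sphinx prefix-matches it.

    Terms shorter than min_prefix_len (2 in the config above) are
    dropped, since Sphinx cannot prefix-match them anyway.
    """
    terms = [t for t in query.split() if len(t) >= min_len]
    return " ".join(t + "*" for t in terms)

print(star_query("ip"))       # ip*
print(star_query("iph one"))  # iph* one*
```

The rewritten string would then be passed to Query() in place of the raw user input.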

Related

Get list of feusers with typoscript, can't get the names of the associated usergroups

I am generating this list of feusers and trying to get the names of the associated usergroups. This code worked before the update to TYPO3 8.x (it's just the relevant part of the whole thing):
40 = TEXT
40.field = usergroup
40.split {
token = ,
cObjNum = 1 || 2
1 {
10 = CONTENT
10.table = fe_groups
10.select.pidInList = 22
10.select.andWhere.current = 1
10.select.andWhere.wrap = uid=|
10.select.where = (title NOT LIKE 'Netzwerk')
10.renderObj = TEXT
10.renderObj.field = title
10.renderObj.wrap = |, <br />
}
2 < .1
2.10.renderObj.wrap >
}
With TYPO3 8, 'andWhere' is deprecated, so I tried it like this, but failed:
40 = TEXT
40.field = usergroup
40.split {
token = ,
cObjNum = 1 || 2
1{
10 = CONTENT
10 {
table = fe_groups
select {
pidInList = 22
where.current = 1
where.wrap = uid= |
}
10.renderObj = TEXT
10.renderObj.field = title
10.renderObj.wrap = |,
}
2 < .1
2.10.renderObj.wrap
}
}
Thanks for pointing me in the right direction.
You should not split up the comma-separated values; go for uidInList instead. This way you get rid of the surrounding split and fetch the elements in just one go.
40 = CONTENT
40 {
table = fe_groups
select {
uidInList.field = usergroup
pidInList = 22
}
renderObj = TEXT
renderObj.field = title
renderObj.wrap = |,
}
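For intuition only, here is the same idea as a Python sketch with made-up data (this is not the TYPO3 API, just an illustration of what uidInList.field = usergroup plus pidInList accomplish in one pass):

```python
# Hypothetical fe_groups rows; field names mirror the TypoScript above.
fe_groups = [
    {"uid": 1, "pid": 22, "title": "Editors"},
    {"uid": 2, "pid": 22, "title": "Netzwerk"},
    {"uid": 3, "pid": 99, "title": "Other"},
]

def render_groups(usergroup_field, pid=22):
    """Resolve a comma-separated uid list in a single query-like pass."""
    uids = {int(u) for u in usergroup_field.split(",") if u.strip()}
    titles = [g["title"] for g in fe_groups
              if g["uid"] in uids and g["pid"] == pid]
    return ", ".join(titles)

print(render_groups("1,2"))  # Editors, Netzwerk
```

The point of the answer is exactly this: no manual splitting and per-token sub-queries, just one fetch over the whole uid list.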

Search special chars by space in Sphinx

I have a problem with Sphinx search.
I have this string indexed:
xyz a'qwerty
I need it to be found for each of these queries:
xy - ok
xy a - ok
xyz a'qwerty - ok
xyz a qwerty - ok
xyz a qwe - not ok
I really can't get the right result. Does anyone know how to do this?
My index looks like this; the regexp_filter lines were experiments, so they can be removed.
index ProductSearch
{
source = ProductSearchSource
path = c:/wamp/sphinx/data/product
docinfo = extern
enable_star = 0
expand_keywords = 1
min_word_len = 2
min_prefix_len = 1
charset_type = utf-8
charset_table = 0..9, A..Z->a..z, _, a..z, U+0022, U+0026, U+0027, U+0060, U+00B4, U+002E, U+0e1->a, U+0c1->a, U+10d->c, U+10c->c, U+10f->d, U+10e->d, U+0e9->e, U+0c9->e, U+11b->e, U+11a->e, U+0ed->i, U+0cd->i, U+148->n, U+147->n, U+0f3->o, U+0d3->o, U+159->r, U+158->r, U+161->s, U+160->s, U+165->t, U+164->t, U+0fa->u, U+0da->u, U+16f->u, U+16e->u, U+0fd->y, U+0dd->y, U+17e->z, U+17d->z,
wordforms = c:/wamp/www/project/configs/sphinx/synonyms
regexp_filter = (\w*)'(\w*) => \1'\2
regexp_filter = (\w*)'(\w*) => \1 \2
regexp_filter = (\w*)'(\w*) => \1
regexp_filter = (\w*)'(\w*) => \2
}
Using SPH_MATCH_EXTENDED2.
PS: Sorry for my bad English.
Problem solved: I had missed a synonym in wordforms. It was rewriting my test word, so it looked like Sphinx wasn't working correctly. (Facepalm here.)
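As an aside on the regexp_filter experiments above: as far as I understand, Sphinx applies regexp_filter lines in declaration order to the same text, so the chain effectively just replaces the apostrophe with a space (the later patterns no longer match). The snippet below previews what each pattern would produce on its own, using Python's re, which shares the same \1/\2 replacement style:

```python
import re

word = "a'qwerty"
# The four replacement patterns from the index definition above.
patterns = [r"\1'\2", r"\1 \2", r"\1", r"\2"]
results = [re.sub(r"(\w*)'(\w*)", repl, word) for repl in patterns]
print(results)  # ["a'qwerty", "a qwerty", "a", "qwerty"]
```

This makes it easy to test filter candidates outside Sphinx before reindexing.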

F# SQLProvider column order doesn't match the order in the table

I select from a PostgreSQL view/table and export the values into an Excel file.
The Excel file's column order needs to be the same as the table's, but the SqlProvider selects them in alphabetical order...
My Code is:
module ViewToExcel
open System
open System.IO
//open Microsoft.Office.Interop.Excel
open System.Drawing
open Npgsql
open FSharp.Data.Sql
open OfficeOpenXml
open Casaubon
open NpgsqlTypes
let [<Literal>] connectionString = @"Server=localhost;Database=db;User Id=postgres;Password=;"
let [<Literal>] npgPath = @"..\packages\Npgsql.3.1.7\lib\net451"
type sqlConnection = SqlDataProvider<ConnectionString = connectionString,
DatabaseVendor = Common.DatabaseProviderTypes.POSTGRESQL,
ResolutionPath = npgPath,
IndividualsAmount = 1000,
UseOptionTypes = true>
let functionParseViewToExcel (excelPath:string, serverName:string, dbName:string) =
/////////////////////////////////Get Data Connection///////////////////////
printf "connect to db\n"
let connectionUserString = @"Server="+serverName+";Database="+dbName+";User Id=postgres;Password=;"
let ctx = sqlConnection.GetDataContext(connectionUserString)
let weekCalcView = ctx.Public.CcVibeWeeklyCalculations
// weekCalcView|> Seq.toList
let weekCalcViewSeq = ctx.Public.CcVibeWeeklyCalculations|> Seq.toArray
////////////////////////////////// Start Excel//////////////////////////////
let newExcelFile = FileInfo(excelPath + "cc_vibe_treatment_period_"+ DateTime.Today.ToString("yyyy_dd_MM")+".xlsx");
if (newExcelFile.Exists) then
newExcelFile.Delete();
let pck = new ExcelPackage(newExcelFile);
//Add the 'xxx' sheet
let ws = pck.Workbook.Worksheets.Add("xxx");
//printf "success to start the excel file\n"
let mutable columNames = "blabla"
for col in weekCalcViewSeq.[0].ColumnValues do
let columnName = match col with |(a, _) -> a
//printf "a %A\n" columnName
let columnNamewithPsic = "," + columnName
columNames <- columNames + columnNamewithPsic
ws.Cells.[1, 1].LoadFromText(columNames.Replace("blabla,",""))|> ignore
ws.Row(1).Style.Fill.PatternType <- Style.ExcelFillStyle.Solid
ws.Row(1).Style.Fill.BackgroundColor.SetColor(Color.FromArgb(170, 170, 170))
ws.Row(1).Style.Font.Bold <- true;
ws.Row(1).Style.Font.UnderLine <- true;
let mutable subject = weekCalcViewSeq.[0].StudySubjectLabel.Value // in order to color the rows according to subjects
let mutable color = 0
for row in 1.. weekCalcViewSeq.Length do
let mutable columValues = "blabla"
for col in weekCalcViewSeq.[row-1].ColumnValues do
let columnValue = match col with |(_, a) -> a
//printf "a %A\n" columnValue
match columnValue with
| null -> columValues <- columValues + "," + ""
| _ -> columValues <- columValues + "," + columnValue.ToString()
ws.Cells.[row + 1, 1].LoadFromText(columValues.Replace("blabla,",""))|> ignore
/////////////////////Color the row according to subject///////////////
if (weekCalcViewSeq.[row - 1].StudySubjectLabel.Value = subject) then
if (color = 0) then
ws.Row(row + 1).Style.Fill.PatternType <- Style.ExcelFillStyle.Solid
ws.Row(row + 1).Style.Fill.BackgroundColor.SetColor(Color.FromArgb(255,255,204))
else
ws.Row(row + 1).Style.Fill.PatternType <- Style.ExcelFillStyle.Solid
ws.Row(row + 1).Style.Fill.BackgroundColor.SetColor(Color.White)
else
subject <- weekCalcViewSeq.[row - 1].StudySubjectLabel.Value
if (color = 0) then
color <- 1
ws.Row(row + 1).Style.Fill.PatternType <- Style.ExcelFillStyle.Solid
ws.Row(row + 1).Style.Fill.BackgroundColor.SetColor(Color.White)
else
color <- 0
ws.Row(row + 1).Style.Fill.PatternType <- Style.ExcelFillStyle.Solid
ws.Row(row + 1).Style.Fill.BackgroundColor.SetColor(Color.FromArgb(255,255,204))
pck.Save()
The Excel output fields are:
bloating_avg,caps_fail,caps_success,date_of_baseline_visit,discomfort_avg, etc.
But the order in the table isn't the same.
Could someone help me?
Thanks!
You can write a small helper function to extract the field (column) names via Npgsql. After that you can just use this list of column names to create your Excel table. The getColNames function gets them from a DataReader. Obviously you can refactor it further to take the table name as a parameter, etc.
#r @"..\packages\SQLProvider.1.0.33\lib\FSharp.Data.SqlProvider.dll"
#r @"..\packages\Npgsql.3.1.7\lib\net451\Npgsql.dll"
open System
open FSharp.Data.Sql
open Npgsql
open NpgsqlTypes
let conn = new NpgsqlConnection("Host=localhost;Username=postgres;Password=root;Database=postgres;Pooling=false")
conn.Open()
let cmd = new NpgsqlCommand()
cmd.Connection <- conn
cmd.CommandText <- """ SELECT * FROM public."TestTable1" """
let recs = cmd.ExecuteReader()
let getColNames (recs:NpgsqlDataReader) =
let columns = recs.GetColumnSchema() |> Seq.toList
columns |> List.map (fun x -> x.BaseColumnName)
let colnames = getColNames recs
//val colnames : string list = ["ID"; "DT"; "ADAY"]
recs.Dispose()
conn.Dispose()
You can see that the column names are not in alphabetical order. You could use this column name list to get at the records in the correct order. Or just use the Reader object directly without the type provider.
Edit: Using records to map the table
It is also possible to extract the data, using the type provider, in the required format, by wiring up the types, and then using .MapTo<T>:
type DataRec = {
DT:DateTime
ADAY:String
ID:System.Int64
}
type sql = SqlDataProvider<dbVendor,connString2,"",resPath,indivAmount,useOptTypes>
let ctx = sql.GetDataContext()
let table1 = ctx.Public.TestTable1
let qry = query { for row in table1 do
select row} |> Seq.map (fun x -> x.MapTo<DataRec>())
qry |> Seq.toList
val it : DataRec list = [{DT = 2016/09/27 00:00:00;
ADAY = "Tuesday";
ID = 8L;}; {DT = 2016/09/26 00:00:00;
ADAY = "Monday";
ID = 9L;}; {DT = 2016/09/25 00:00:00;
ADAY = "Sunday";
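The core trick in this answer, reading column names from the reader's schema so they come back in table declaration order rather than alphabetical order, is not Npgsql-specific. A minimal stand-in using Python's built-in sqlite3 (an assumption for illustration: any DB-API cursor exposes the same information via cursor.description):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable1 (ID INTEGER, DT TEXT, ADAY TEXT)")

# cursor.description preserves the table's declaration order,
# unlike the alphabetically sorted properties of the type provider.
cur = conn.execute("SELECT * FROM TestTable1")
colnames = [d[0] for d in cur.description]
print(colnames)  # ['ID', 'DT', 'ADAY']
conn.close()
```

The resulting list can then drive the Excel header row, exactly as getColNames does with NpgsqlDataReader.GetColumnSchema above.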

Sphinx internal error / query not sent to searchd

I'm trying to use Sphinx with a service called Questasy (nobody will know it). Our Dutch colleagues did this before, and the software definitely gives us the ability to run searches via Sphinx.
So here the problem I got:
I set up the questasy portal, enabled the questasy usage and the portal runs perfectly.
I unpacked Sphinx to C:/Sphinx, created the /data and /log directories.
I set up the config file and ran the indexer. It works.
I installed searchd as a service with the config and it works and runs.
BUT now when I try to search in the portal, it shows me a message like "internal error. Please try again later". When I look into the query.log there is nothing in it, so I think the query isn't sent to the searchd service. I checked the config, I checked the port it is listening on, and everything is just like our colleagues have it.
Does anybody know about a common bug or problem or something like this that we missed?
Here is my .conf:
# Questasy configuration file for sphinx
#
# To handle the Sphinx requirement that every document have a unique 32-bit ID,
# use a unique number for each index as the first 8 bits, and then use
# the normal index from the database for the last 24 bits.
# Here is the list of "index ids"
# 1 - English Question Text
# 2 - Dutch Question Text
# 3 - Concepts
# 4 - Variables
# 5 - Study Units
# 6 - Publications
#
# The full index will combine all of these indexes
#
# COMMANDS
# To index all of the files (when searchd is not running), use the command:
# indexer.exe --config qbase.conf --all
# To index all of the files (when searchd is running), use the command:
# indexer.exe --config qbase.conf --all --rotate
# Set up searchd as a service with the command
# searchd.exe --install --config c:\full\path\to\qbase.conf
# Stop searchd service with the command
# searchd.exe --stop --config c:\full\path\to\qbase.conf
# Remove searchd service with the command
# searchd.exe --delete --config c:\full\path\to\qbase.conf
# To just run searchd for development/testing
# searchd.exe --config qbase.conf
# base class with basic connection information
source base_source
{
type = mysql
sql_host = localhost
sql_user = root
sql_pass =
sql_db = questasy
sql_port = 3306 # optional, default is 3306
}
# Query for English Question Text
source questions_english : base_source
{
sql_query = SELECT ((1<<24)|QuestionItem.id) as id, StudyUnit.id as study_unit_id, QuestionItem.lh_text_1 as question_text, GROUP_CONCAT(Code.lt_label_1 SEPARATOR ' ') as answer_text FROM `question_items` AS `QuestionItem` LEFT JOIN `question_schemes` AS `QuestionScheme` ON (`QuestionItem`.`question_scheme_id` = `QuestionScheme`.`id`) LEFT JOIN `data_collections` AS `DataCollection` ON (`DataCollection`.`id` = `QuestionScheme`.`data_collection_id`) LEFT JOIN `study_units` AS `StudyUnit` ON (`StudyUnit`.`id` = `DataCollection`.`study_unit_id`) LEFT JOIN `response_domains` AS `ResponseDomain` ON (`QuestionItem`.`response_domain_id` = `ResponseDomain`.`id`) LEFT JOIN `code_schemes` As `CodeScheme` ON (`ResponseDomain`.`code_scheme_id` = `CodeScheme`.`id` AND `ResponseDomain`.`domain_type`=4) LEFT JOIN `codes` AS `Code` ON (`Code`.`code_scheme_id` = `CodeScheme`.`id`) WHERE `StudyUnit`.`published` >= 20 GROUP BY QuestionItem.id
sql_attr_uint = study_unit_id
# sql_query_info = SELECT CONCAT('/question_items/view/',$id) AS URL
}
# Query for Dutch Question Text
source questions_dutch : base_source
{
sql_query = SELECT ((2<<24)|QuestionItem.id) as id, StudyUnit.id as study_unit_id, QuestionItem.lh_text_2 as question_text, GROUP_CONCAT(Code.lt_label_2 SEPARATOR ' ') as answer_text FROM `question_items` AS `QuestionItem` LEFT JOIN `question_schemes` AS `QuestionScheme` ON (`QuestionItem`.`question_scheme_id` = `QuestionScheme`.`id`) LEFT JOIN `data_collections` AS `DataCollection` ON (`DataCollection`.`id` = `QuestionScheme`.`data_collection_id`) LEFT JOIN `study_units` AS `StudyUnit` ON (`StudyUnit`.`id` = `DataCollection`.`study_unit_id`) LEFT JOIN `response_domains` AS `ResponseDomain` ON (`QuestionItem`.`response_domain_id` = `ResponseDomain`.`id`) LEFT JOIN `code_schemes` As `CodeScheme` ON (`ResponseDomain`.`code_scheme_id` = `CodeScheme`.`id` AND `ResponseDomain`.`domain_type`=4) LEFT JOIN `codes` AS `Code` ON (`Code`.`code_scheme_id` = `CodeScheme`.`id`) WHERE `StudyUnit`.`published` >= 20 GROUP BY QuestionItem.id
sql_attr_uint = study_unit_id
# sql_query_info = SELECT CONCAT('/question_items/view/',$id) AS URL
}
# Query for Concepts
source concepts : base_source
{
sql_query = SELECT ((3<<24)|Concept.id) as id, Concept.lt_label_1 as concept_label, Concept.lh_description_1 as concept_description FROM `concepts` AS `Concept`
# sql_query_info = SELECT CONCAT('/concepts/view/',$id) AS URL
}
# Query for Data Variable
source variables : base_source
{
sql_query = SELECT ((4<<24)|DataVariable.id) as id, StudyUnit.id as study_unit_id, DataVariable.name as variable_name, DataVariable.lh_label_1 as variable_label FROM `data_variables` AS `DataVariable` LEFT JOIN `variable_schemes` AS `VariableScheme` ON (`DataVariable`.`variable_scheme_id` = `VariableScheme`.`id`) LEFT JOIN `base_logical_products` AS `BaseLogicalProduct` ON (`BaseLogicalProduct`.`id` = `VariableScheme`.`base_logical_product_id`) LEFT JOIN `study_units` AS `StudyUnit` ON (`StudyUnit`.`id` = `BaseLogicalProduct`.`study_unit_id`) WHERE `StudyUnit`.`published` >= 15
sql_attr_uint = study_unit_id
# sql_query_info = SELECT CONCAT('/data_variables/view/',$id) AS URL
}
# Query for Study Units
source study_units : base_source
{
sql_query = SELECT ((5<<24)|StudyUnit.id) as id, StudyUnit.id as study_unit_id, StudyUnit.fulltitle as study_unit_name, StudyUnit.subtitle as study_unit_subtitle, StudyUnit.alternate_title AS study_unit_alternatetitle, StudyUnit.lh_note_1 as study_unit_note, StudyUnit.lh_purpose_1 as study_unit_purpose, StudyUnit.lh_abstract_1 as study_unit_abstract, StudyUnit.creator as study_unit_creator FROM study_units AS StudyUnit WHERE `StudyUnit`.`published` >= 10
sql_attr_uint = study_unit_id
# sql_query_info = SELECT CONCAT('/study_units/view/',$id) AS URL
}
# Query for Publications
source publications : base_source
{
sql_query = SELECT ((6<<24)|Publication.id) as id, Publication.id as publication_id, Publication.title as publication_name, Publication.subtitle as publication_subtitle, Publication.creator as publication_creator, Publication.contributor as publication_contributor, Publication.abstract as publication_abstract, Publication.lh_note_1 as publication_note, Publication.source as publication_source FROM publications AS Publication WHERE NOT(`Publication`.`accepted_timestamp` IS NULL)
# sql_query_info = SELECT CONCAT('/publications/view/',$id) AS URL
}
# Query for Hosted Files - Other materials
source other_materials : base_source
{
sql_query = SELECT ((7<<24)|HostedFile.id) as id, OtherMaterial.title as hosted_file_title, HostedFile.name as hosted_file_name, StudyUnit.id as study_unit_id FROM `hosted_files` as `HostedFile`, `other_materials` as OtherMaterial, `study_units` as `StudyUnit` WHERE OtherMaterial.hosted_file_id = HostedFile.id AND OtherMaterial.study_unit_id = StudyUnit.id AND `StudyUnit`.`published` >= 20
sql_attr_uint = study_unit_id
# sql_query_info = SELECT CONCAT('/hosted_files/download/',$id) AS URL
}
# Query for Hosted Files - Datasets
source physical_instances : base_source
{
sql_query = SELECT ((8<<24)|HostedFile.id) as id, PhysicalInstance.name as hosted_file_name, StudyUnit.id as study_unit_id FROM `hosted_files` as `HostedFile`, `physical_instances` as PhysicalInstance, `study_units` as `StudyUnit` WHERE PhysicalInstance.hosted_file_id = HostedFile.id AND PhysicalInstance.study_unit_id = StudyUnit.id AND `StudyUnit`.`published` >= 20
sql_attr_uint = study_unit_id
# sql_query_info = SELECT CONCAT('/hosted_files/download/',$id) AS URL
}
# Query for Physical Data Products (Variable Schemes)
source physical_data_products : base_source
{
sql_query = SELECT ((9<<24)| PhysicalDataProduct.id) as id, PhysicalDataProduct.name FROM `physical_data_products` AS `PhysicalDataProduct`, `study_units` as `StudyUnit` WHERE PhysicalDataProduct.study_unit_id = StudyUnit.id AND PhysicalDataProduct.deleted = 0 AND StudyUnit.published >= 20
}
# English Question Text Index
index questions_english_index
{
source = questions_english
path = C:\Sphinx\data\questions_english_index
docinfo = extern
mlock = 0
morphology = stem_en
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
# enable_star = 1
html_strip = 1
# charset_type = utf-8
}
# Dutch Question Text Index
index questions_dutch_index
{
source = questions_dutch
path = C:\Sphinx\data\questions_dutch_index
docinfo = extern
mlock = 0
morphology = stem_en
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
# enable_star = 1
html_strip = 1
# charset_type = utf-8
}
# Concept Index
index concepts_index
{
source = concepts
path = C:\Sphinx\data\concepts_index
docinfo = extern
mlock = 0
morphology = stem_en
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
# enable_star = 1
html_strip = 1
# charset_type = utf-8
}
# Variable Index
index variables_index
{
source = variables
path = C:\Sphinx\data\variables_index
docinfo = extern
mlock = 0
morphology = stem_en
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
# enable_star = 1
html_strip = 1
# charset_type = utf-8
}
# Study Unit Index
index study_units_index
{
source = study_units
path = C:\Sphinx\data\study_units_index
docinfo = extern
mlock = 0
morphology = stem_en
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
# enable_star = 1
html_strip = 1
# charset_type = utf-8
}
# Publication Index
index publications_index
{
source = publications
path = C:\Sphinx\data\publications_index
docinfo = extern
mlock = 0
morphology = stem_en
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
# enable_star = 1
html_strip = 1
# charset_type = utf-8
}
# Other Materials Index
index other_materials_index
{
source = other_materials
path = C:\Sphinx\data\other_materials_index
docinfo = extern
mlock = 0
morphology = stem_en
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
# enable_star = 1
html_strip = 1
# charset_type = utf-8
}
# Datasets file Index
index physical_instances_index
{
source = physical_instances
path = C:\Sphinx\data\physical_instances_index
docinfo = extern
mlock = 0
morphology = stem_en
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
# enable_star = 1
html_strip = 1
# charset_type = utf-8
}
# Datasets Index
index physical_data_products_index
{
source = physical_data_products
path = C:\Sphinx\data\physical_data_products_index
docinfo = extern
mlock = 0
morphology = stem_en
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
# enable_star = 1
html_strip = 1
# charset_type = utf-8
}
# Full Index - merge all of the other indexes
index full_index
{
type = distributed
local = questions_english_index
local = questions_dutch_index
local = concepts_index
local = variables_index
local = study_units_index
local = publications_index
local = other_materials_index
local = physical_instances_index
local = physical_data_products_index
}
indexer
{
# memory limit, in bytes, kilobytes (16384K) or megabytes (256M)
# optional, default is 32M, max is 2047M, recommended is 256M to 1024M
mem_limit = 256M
# maximum IO calls per second (for I/O throttling)
# optional, default is 0 (unlimited)
#
# max_iops = 40
# maximum IO call size, bytes (for I/O throttling)
# optional, default is 0 (unlimited)
#
# max_iosize = 1048576
}
# Settings for the searchd service
searchd
{
# port = 3312
log = C:\Sphinx\log\searchd.log
query_log = C:\Sphinx\log\query.log
pid_file = C:\Sphinx\log\searchd.pid
listen = 127.0.0.1
}
# C:\Sphinx\bin\searchd --config C:\xampp\htdocs\sphinx\vendors\questasy.conf
Thanks in advance
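The composite document-ID scheme described in the config comments above (a unique index id in the top 8 bits, the database row id in the low 24 bits) can be sketched as follows; the function names are made up, and the assert encodes the comment's 32-bit assumption, which caps row ids at 2^24 - 1:

```python
def make_doc_id(index_id, row_id):
    """Pack an 8-bit index id and a 24-bit row id into one document id."""
    assert 0 < index_id < 256 and 0 <= row_id < (1 << 24)
    return (index_id << 24) | row_id

def split_doc_id(doc_id):
    """Recover (index_id, row_id) from a packed document id."""
    return doc_id >> 24, doc_id & ((1 << 24) - 1)

doc = make_doc_id(6, 12345)   # a Publications row, per the comment's list
print(split_doc_id(doc))      # (6, 12345)
```

This is the same arithmetic the sql_query lines perform with expressions like ((6<<24)|Publication.id).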

TYPO3: Can't remove link from YAML header

As the title says, I have a website made with TYPO3. There is a link in the YAML header that I can't remove, because I can't find where it is added.
I know that in the Template view on the root page, the pages displayed there can be edited in the Constant Editor:
And I also know that it's possible to add links with TypoScript; I think it looks similar to this:
lib.header.20.30 = TEXT
lib.header.20.30.value = Link1 Name
lib.header.20.30.typolink.parameter = http://link1.ziel
The problem is that I can't find the link I want to remove, neither in headerNavigationIncludeList nor in the TypoScript.
Constants:
### Change message, if user did not fill out mandatory fields:
styles.content.mailform.badMess = Leere Pflichtfelder:
[globalVar = GP:L = 1]
styles.content.mailform.badMess = You must fill in these fields:
[global]
[globalVar = GP:L = 2]
styles.content.mailform.badMess = Change Me:
[global]
### Change settings of Dropdown Sitemap extension:
plugin {
tx_dropdownsitemap_pi1 {
picture {
params = hspace="5" vspace="0" border="0"
}
}
}
### Begin of standard constants
### Only Yes/No options are listed here, for others see constant editor.
## searchbox
searchBoxOff = 0
## header
topNavOff = 0
firstHeaderImageOff = 0
secondHeaderImageOff = 0
linkFirstImageToggle = 1
noGifBuilderForFirstHeaderImage = 1
noGifBuilderForSecondHeaderImage = 1
## teaser
teaserOff = 1
rootlineOff = 0
languageMenuOff = 0
fontSizeSelectorOff = 0
dateAndTimeOff = 0
## basics
selectorBoxOff = 0
tabNavigationOff = 0
subMenuOff = 0
menuHeadlineOff = 1
subMenuExpandToggle = 0
footerOff = 0
## languages
languageLocaleStandardLang = german
languageLocaleFirstLang = english
languageLocaleSecondLang = french
languageIsoCodeStandardLang = de
languageIsoCodeFirstLang = en
languageIsoCodeSecondLang = fr
## headlines
replaceH1withImage = 0
replaceH2withImage = 0
replaceH3withImage = 0
replaceH4withImage = 0
replaceH5withImage = 0
## statistics
statisticsSetting = 0
statApacheSetting = 0
statMysqlSetting = 0
## expert settings
userAdmPanelOn = 1
userIndexingOn = 1
userIndexExternalsOn = 0
userDisablePrefComm = 0
yamlDebugOn = 0
yamlFillerLinkOn = 0
footerFirstLangHtmlCode = <div class="left">Born Informatik AG, Berner Technopark, Morgenstrasse 129, CH-3018 Bern</div><div class="right">Copyright © 2008 Born Informatik AG</div>
footerStandardLangHtmlCode = <div class="left">Born Informatik AG, Berner Technopark, Morgenstrasse 129, CH-3018 Bern</div><div class="right">Copyright © 2008 Born Informatik AG</div>
footerSecondLangHtmlCode = <div class="left">Born Informatik AG, Berner Technopark, Morgenstrasse 129, CH-3018 Bern</div><div class="right">Copyright © 2008 Born Informatik AG</div>
searchPagePID = 32
plugin.tt_news.archiveTypoLink.parameter = 32
styles.content.imgtext.maxW = 410
plugin.wtsnowstorm.pid = 1,1
plugin.tx_srlanguagemenu_pi1.showCurrent = 0
plugin.meta = name=google-site-verification
headerNavigationIncludeList = 128, 31, 33, 34
TSConstantEditor.yaml-header.5 = topNavOff, headerNavigationIncludeList
Setup:
###############################
# Delete default styles of
# Plugin dropdown sitemap
###############################
plugin.tx_dropdownsitemap_pi1._CSS_DEFAULT_STYLE >
###############################
# Delete default styles of
# cssstyledcontent (Copied to content.css, in order to be able to modify them there.)
###############################
plugin.tx_cssstyledcontent._CSS_DEFAULT_STYLE >
###############################
# Configuration of Statistics
###############################
page.headerData.100 < plugin.tx_kestats_pi1
###############################
# metatags-config
# Insert your own data here.
###############################
plugin.meta {
flags.useSecondaryDescKey = 0
flags.alwaysGlobalDescription = 1
flags.alwaysGlobalKeywords = 1
global.author = Born Informatik AG
global.email =
global.copyright = Born Informatik AG
global.keywords = Born Informatik AG
global.description = Born Informatik AG
global.revisit = 2 days
global.robots = index,follow
global.language = {$languageIsoCodeStandardLang}
}
#### Change language, keywords and description for first foreign language
[globalVar = GP:L = {$firstForeignLanguage}]
plugin.meta.global.language = {$languageIsoCodeFirstLang}
plugin.meta.global.keywords = my keywords for first foreign language
plugin.meta.global.description = my description for first foreign language
[global]
#### Change language, keywords and description for second foreign language
[globalVar = GP:L = {$secondForeignLanguage}]
plugin.meta.global.language = {$languageIsoCodeSecondLang}
plugin.meta.global.keywords = my keywords for second foreign language
plugin.meta.global.description = my description for second foreign language
[global]
page.headerData.999 < plugin.meta
###############################
# Configuration of newloginbox
###############################
plugin.tx_newloginbox_pi1._CSS_DEFAULT_STYLE >
plugin.tx_newloginbox_pi3._CSS_DEFAULT_STYLE >
###############################
# Configuration of tt_news
###############################
plugin.tt_news {
_CSS_DEFAULT_STYLE >
usePagesRelations = 1
usePiBasePagebrowser = 1
archiveTitleCObject {
10.strftime = %B - %Y
}
getRelatedCObject {
10.1.20.strftime = %d.%m.%y %H:%M
10.2.20.strftime = %d.%m.%y %H:%M
10.default.20.strftime = %d.%m.%y %H:%M
}
displaySingle {
date_stdWrap.strftime= %d.%m.%Y
time_stdWrap.strftime= %H:%M
age_stdWrap.age = Minuten | Stunden | Tage | Jahre
}
displayLatest {
date_stdWrap.strftime= %d.%m.%y
time_stdWrap.strftime= %H:%M
}
displayList {
date_stdWrap.strftime= %A %d. %B %Y
time_stdWrap.strftime= %d.%m.%y %H:%M
}
}
plugin.tt_news {
catOrderBy = title
displayCatMenu {
catmenuRootIconFile = EXT:tt_news/res/tt_news_cat.gif
catmenuNoRootIcon = 0
catmenuIconMode = -1
}
}
plugin.tt_news {
pageBrowser {
dontLinkActivePage = 1
maxPages = 10
showRange = 0
showPBrowserText = 1
showResultCount = 0
showFirstLast = 0
}
}
plugin.tt_news.displayLatest.subheader_stdWrap.crop = 100 | ... | 1
#### Change news-settings for first foreign language
[globalVar = GP:L = {$firstForeignLanguage}]
plugin.tt_news.getRelatedCObject.10.1.20.strftime = %d.%m.%y %H:%M
plugin.tt_news.getRelatedCObject.10.2.20.strftime = %d.%m.%y %H:%M
plugin.tt_news.getRelatedCObject.10.default.20.strftime = %d.%m.%y %H:%M
plugin.tt_news.displaySingle.date_stdWrap.strftime= %d.%m.%Y
plugin.tt_news.displaySingle.time_stdWrap.strftime= %H:%M
plugin.tt_news.displaySingle.age_stdWrap.age = Minutes | Hours | Days | Years
plugin.tt_news.displayLatest.date_stdWrap.strftime= %m/%d/%y
plugin.tt_news.displayLatest.time_stdWrap.strftime= %H:%M
plugin.tt_news.displayList.date_stdWrap.strftime= %A %d. %B %Y
plugin.tt_news.displayList.time_stdWrap.strftime= %d.%m.%y %H:%M
[global]
#### Change news-settings for second foreign language
[globalVar = GP:L = {$secondForeignLanguage}]
plugin.tt_news.displaySingle.age_stdWrap.age = Minutes | Heures | Jours | Ans
[global]
################################
# Configuration of indexedsearch
################################
plugin.tx_indexedsearch {
_CSS_DEFAULT_STYLE >
_DEFAULT_PI_VARS.results = 10
forwardSearchWordsInResultLink = 1
blind {
type=-1
defOp=0
sections=0
media=1
order=-1
group=-1
extResume=-1
lang=-1
desc=-1
results=0
}
show {
rules=0
parsetimes=1
L2sections=1
L1sections=1
LxALLtypes=0
clearSearchBox = 0
clearSearchBox.enableSubSearchCheckBox=0
}
search {
rootPidList =
}
}
## CSS for rgtabs was moved and edited in content.css
plugin.tx_rgtabs_pi1.pathToCSS >
lib.nav.20.1.wrap = <ul><li class="home"><span>Home</span></li>|</ul>
lib.nav.20.1.ACT.allWrap = <li class="active">|</li>
lib.nav.20.wrap = <div id="navmain">|</div>
lib.nav.20.excludeUidList = 2
lib.submenu.10.30.1.ACTIFSUB = 1
lib.submenu.10.30.1.ACTIFSUB.allWrap = <strong>|</strong><span class="hidden">.</span>
lib.submenu.10.30.1.ACTIFSUB >
lib.submenu.10.20.wrap = <li id="title">|</li><li id="separator">|</li>
plugin.tt_news.displayList.date_stdWrap.strftime = %d.%m.%Y
page.headerData.19 = TEXT
page.headerData.19.value = <link rel="SHORTCUT ICON" href="http://born.ch/fileadmin/img/icons/favicon.ico">
page.headerData.19.value = <link rel="SHORTCUT ICON" href="fileadmin/img/icons/favicon.ico">
#awstats congig
config.stat = 1
config.stat_apache = 1
config.stat_apache_logfile = intranet.log
# SNOWFLAKES!!
snowstorm = PAGE
snowstorm {
typeNum = 3136
10 < plugin.tx_wtsnowstorm
config {
disableAllHeaderCode = 1
disablePrefixComment = 1
xhtml_cleaning = 0
admPanel = 0
}
}
# Add javascript file to html header
page.headerData.3136 = TEXT
page.headerData.3136 {
wrap = <script src="|" type="text/javascript"></script>
typolink.parameter.data = page : uid
typolink.additionalParams = &type=3136
typolink.addQueryString = 1
typolink.returnLast = url
}
page.10 >
page.headerData.28.value = <script type="text/javascript" src="fileadmin/scripts/jquery-1.3.2.min.js"></script>
lib.footer.100.value = <div class="left">Born Informatik AG, Berner Technopark, Morgenstrasse 129, CH-3018 Bern</div><div class="right">Copyright © 2008-2011 Born Informatik AG</div>
config.sys_language_overlay = 1
lib.header.20.30 < lib.teaser.20.10
lib.header.20.30.languagesUidsList = 0,2
lib.header.20.30.defaultLayout = 2
lib.header.20.30.flag.CUR.doNotLinkIt = 1
lib.header.20.30.link.CUR.doNotLinkIt = 1
lib.header.20.30.links.CUR.doNotLinkIt = 1
lib.header.20.30.link.NO.stdWrap.wrap = | <div class="NO"> | </div>
page.headerData.2 = TEXT
page.headerData.2.insertData=1
page.headerData.2.case=lower
page.headerData.2.wrap = <meta name="google-site-verification" content="zc2lFQCXoXPUZrGCU-axHs4hoYSvruh2UsU9WgM_6VE">
page.headerData.3 = TEXT
page.headerData.3.insertData=1
page.headerData.3.case=lower
page.headerData.3.wrap = <meta name="google-site-verification" content="KCjWqRAjA0I77QRa9C909EPmEuX-UXb3vO213VBZeEg">
The link I want to remove is the first one, the Intranet link.
Thanks in advance. If you need more info to help me, just say what you need.
You surely have cleared the cache, I guess?
I'm wondering why in your listed TS constants there are five page-ids:
headerNavigationIncludeList = 30, 128, 31, 33, 34
The 30 is the ID of the page "Intranet", I guess. It seems your editing of the constants via the Constant Editor was not successful.