ExpressionEngine missing channel entries - tags

I am working on a new web app built on ExpressionEngine, and for the most part I am basing the content on channel entries. However, I am experiencing some very odd issues with the {exp:channel:entries} tag, in that it is not returning all relevant entries each time. I can't figure out what's going on, as the entries are definitely available when viewing them in the control panel, and they will also show up as requested in my template, but sometimes they just disappear and/or are not processed properly. This happens with both large and small result sets, ranging from 3 entries matching the criteria specified in the tag up to 500.
Any thoughts or feedback would be greatly appreciated.

There could be a number of things going on here, so here are some things to look at, just in case (an example tag covering these parameters follows the list):
- If the entries have entry dates in the future, your channel entries tag needs the parameter show_future_entries="yes".
- Likewise, if the entries are closed you'll need status="open|closed", and if they are expired you'll need show_expired="yes".
- Are you looking at a particular category, and these entries aren't assigned to that category?
- Are you looking at a particular category but have excluded category data from the entries tag?
- Are you retrieving more than 100 entries? By default only 100 entries are returned unless you specify a limit parameter.
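Put together, a tag covering those parameters might look something like this (the channel name "news" and the limit are placeholders for your own values):

```
{exp:channel:entries channel="news" status="open|closed" show_future_entries="yes" show_expired="yes" limit="500"}
    <h2>{title}</h2>
    {body}
{/exp:channel:entries}
```

Starting from something permissive like this and then removing parameters one at a time can help you pin down which condition is filtering your entries out.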

Related

InfoPath Repeating Table Replacing Looked Up Values in All Rows; Values Based on Two Separate Fields

Here's the thought process. Believe it or not, I had it working and then it broke, and now I am not sure why or how to fix it.
I have a secondary data source that is housed in a SharePoint list that contains billing rates for various tiers depending on the customer.
When you initially select a Tier and then the subsequent Positions under the daily billing area all is fine and dandy and values look up as they should.
However, as is often the case, someone will input the wrong Tier for the billing sheet, and it will need to be changed to reflect the actual Tier. When this happens, InfoPath grabs the value from the first row for the lookup and makes it the value for all the position rate fields in the repeating table.
You can change them back by selecting a different position and then going back to the original position, but that is another step.
Seriously, this is the last bit for me to sort out before I can put this project behind me. I've included some screenshots of the groups, how they are laid out, and the rules I have set up.
[Rules for Rate Tier field](https://i.stack.imgur.com/DO7e4.png)
[Rules for Position field](https://i.stack.imgur.com/JF8bh.png)
[Groups and Fields layout](https://i.stack.imgur.com/7etEq.png)
[Example of issue I am having](https://i.stack.imgur.com/aKD3t.gif)
I've tried everything from filtering the rate returned by matching the Position in the secondary source to the Position in the form, activated conditionally when the Rate Tier equals a specific Tier. I've also tried eval, which I am not familiar with, so I only ever got true or false; it never returned field contents.
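For what it's worth, the filter I was attempting follows the usual pattern for repeating-table lookups, where (as I understand it) the comparison has to go through current() so each row filters on its own Position rather than the first row's. The data source and field names below are simplified placeholders, not my actual schema:

```
xdXDocument:GetDOM("BillingRates")/dfs:myFields/dfs:dataFields
    /d:SharePointListItem_RW[d:Position = current()/my:Position]/d:Rate
```

The Tier condition would be added as a second predicate in the same filter.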

Grafana - Modifying "All" label in JSON Model

I am currently working on a Grafana project and I need to translate the "All" results into "Tous".
[Screenshot: example of what I need]
The filter that currently works only does so because it has a limited number of results.
[Screenshot: working filter with limited results]
The other variables (filters), however, are more dynamic and will evolve over time. There are therefore unlimited possible results, and the variable's options array is empty.
[Screenshot: non-working filter with unlimited results]
Every time I change the "All" text value of my "fonds" filter (the one not working) to "Tous", it resets to "All" as soon as it has a chance.
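For reference, the part of the dashboard JSON model I'm editing looks roughly like this, trimmed down to the relevant fields ("fonds" is the variable in question):

```json
{
  "templating": {
    "list": [
      {
        "name": "fonds",
        "type": "query",
        "includeAll": true,
        "current": { "text": "Tous", "value": "$__all" },
        "options": []
      }
    ]
  }
}
```

As soon as the variable refreshes its options from the query, "text" goes back to "All".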
Has anyone ever encountered this problem?
Thanks in advance,
Jonathan

Filter for specific string and include 2 following rows

I'm doing some diagnostics for our application by searching for specific messages in CloudWatch. A downside of searching for error messages is that only the rows matching the string are returned, and sometimes valuable information is contained in the rows that were logged right after the matching row.
Is there a method of querying for a row with a specific value, and have a range of logs before and/or after this row included in the result?
The only way to do this is to run another search without the filter, restricting the time window to around the event you found. At the moment, you cannot get the before/after log events through a single search.
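If you want to script that two-step approach, it might look something like this with boto3 (the log group name and filter pattern are placeholders, and this is an untested sketch rather than a polished tool):

```python
import boto3

logs = boto3.client("logs")

LOG_GROUP = "/my/app/log-group"  # placeholder: your log group

# Step 1: find the matching error events.
matches = logs.filter_log_events(
    logGroupName=LOG_GROUP,
    filterPattern='"ERROR"',  # placeholder: your error message
)

# Step 2: for each match, re-read the same stream without a filter,
# restricted to a short window after the match, to capture the rows
# that were logged right after it.
for event in matches["events"]:
    context = logs.get_log_events(
        logGroupName=LOG_GROUP,
        logStreamName=event["logStreamName"],
        startTime=event["timestamp"],
        endTime=event["timestamp"] + 5000,  # 5 seconds after the match
        startFromHead=True,
    )
    for line in context["events"]:
        print(line["message"])
```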

REST pagination content duplicates

I'm creating a REST application that returns a collection of items (a topic with a collection of posts), sorted from newest to oldest.
Following HATEOAS principles, the content is chunked into pages, and the client gets, for example, the current page id, the offset, the data limit, and links to the first, current, and next pages.
There is no problem getting data from the next page, but if somebody adds content while the client is reading the current page, the new data is pushed onto the start of the collection, and the last item of the current page moves to the next page.
If you simply skip posts that have already been loaded, you get fewer items on the next page. One option is to count the items pushed onto the start of the list and increment the offset accordingly.
What are the best practices for this?
Using skip tokens rather than offset indexes, where the token indicates the first value not to include (or the first value to include), is a good technique, provided the value is unique for every item in your result set and is an orderable field under the current sort (see the sketch below). But it's not flawless. Usually this just doesn't matter.
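A rough sketch of what such a skip-token page handler can look like, assuming posts are sorted newest-first by a unique, orderable (created_at, id) pair (all names here are illustrative):

```python
def get_page(posts, limit, after_token=None):
    """posts: list of dicts sorted newest-first by (created_at, id).
    after_token: the (created_at, id) key of the last item the client saw."""
    if after_token is not None:
        # Keep only items strictly older than the token. New items pushed
        # onto the front of the collection have larger keys, so they can
        # never shift older items onto the next page or duplicate them.
        posts = [p for p in posts if (p["created_at"], p["id"]) < after_token]
    page = posts[:limit]
    next_token = (page[-1]["created_at"], page[-1]["id"]) if page else None
    return page, next_token
```

Because the token pins the page boundary to a value rather than a position, inserts at the head of the list cannot produce duplicates or short pages further in.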
If it really does matter, you have to put the IDs of everything on the first page into the call for the 2nd page, and again and again. HATEOAS helps you do stuff like this... but it can get very messy, and things can still pop up on page 1, given the current sorting, when you make a request for page 5... what do you do with that?
Another trick to avoid dupes in a UI is to use the self or canonical link relationships to uniquely identify resources in a page and compare those to existing resources in the UI. Updating the UI with the latest matching resources is usually a simple task. This of course puts some burden on the client.
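On the client side, that comparison can be as simple as keying resources by their canonical href. A minimal sketch, assuming HAL-style links where each resource carries a _links.self.href:

```python
def merge_page(existing, page):
    """Merge a freshly fetched page into the resources already shown,
    keyed by the self link so duplicates update in place."""
    by_href = {r["_links"]["self"]["href"]: r for r in existing}
    for resource in page:
        by_href[resource["_links"]["self"]["href"]] = resource
    return list(by_href.values())
```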
There is not a 1 size fits all solution to this problem. You have to design for the UX you intend to fulfill.

Google Search Appliance (GSA) feeds - unpredictable behavior

We have a metadata-and-url feed and a content feed in our project. The indexing behaviour of documents submitted using either feed is completely unpredictable. For the content feed, the documents get removed from the index after a random interval every time. For the metadata-and-url feed, the additional metadata we add is ignored, again at random. The documents themselves do remain in the index in the latter case; only our custom metadata gets removed. Basically, it looks like the feeds get "forgotten" by the GSA after some time. What could be the cause of this issue, and how do we go about debugging it?
Points to note:
1) Due to unavoidable reasons, our GSA index is always hovering around the license limit (+/- 1000 documents or so). Could this have an effect? Are feeds purged when nearing license limit? We do have "lock = true" set in the feed records though.
2) These fed documents are not linked to from pages and hence (I believe) would have low page rank. Are feeds automatically purged if not linked to from pages?
3) Our follow patterns include the fed documents.
4) We do not use action=delete with the same documents, so that possibility is ruled out. Also for the content feed we always post all the documents. So they are not removed through feeds.
When you hit the license limit, the GSA will start dropping documents from the index, so I'd say that's definitely your problem.