How to access all polls that have been sent inside of a group - py-telegram-bot-api

This might be a newbie question.
My goal is to make a bot that sends polls every day.
The polls I would like to send come from a group that a couple of friends and I used to set up manually, but now I would like to automate it without having to type all the previous questions over again (so we can repost some questions).
What is the way to access these polls for later reuse? I hope this is doable.
Thanks in advance
I have set up the proper environment to send polls, and I tried messing around with the getMessage and getHistory methods, but could not do anything fruitful.
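For context, the sending side I already have looks roughly like this: a minimal sketch assuming the pyTelegramBotAPI (telebot) and schedule packages, with the token, chat id, and question list as placeholders:

```python
import time

import schedule
import telebot

BOT_TOKEN = "123456:replace-me"      # placeholder bot token
GROUP_CHAT_ID = -1001234567890       # placeholder group chat id

bot = telebot.TeleBot(BOT_TOKEN)

# The saved questions we want to be able to repost.
QUESTIONS = [
    ("Favourite editor?", ["vim", "emacs", "VS Code"]),
]

def send_daily_poll():
    question, options = QUESTIONS[0]  # e.g. rotate or pick at random here
    bot.send_poll(GROUP_CHAT_ID, question, options)

schedule.every().day.at("11:00").do(send_daily_poll)

while True:
    schedule.run_pending()
    time.sleep(30)
```

What I am missing is how to fill QUESTIONS from the group's existing polls instead of typing them in by hand.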

Related

No perfect tag for Messenger Platform’s policies

We have a lot of doubts concerning the changes in the Messenger Platform’s policies.
There is the HUMAN_AGENT tag (for which we have already asked permission), which seems to be the one that best fits our processes, but 7 days is still insufficient for us. Could we answer with this “message_tag” 20 days after a user message? What can we do in this case? We have to find a way not to leave the user without an answer.
We plan on using the above-mentioned CONFIRMED_EVENT_UPDATE tag to answer all user messages outside of the 24-hour window. Are there any penalties for us doing so? If there are, what are the penalties? Are they applied at the company level or the page level? None of the messages sent by our company contain what you want to avoid (spam, special offers, discounts, etc.), so we don’t think we should receive any penalty even when using “message_tags”.
We have thought about using the normal answer and, if the “This message is sent outside of the allowed window” error message appears, answering with “message_tags”. Is there any problem with making that first call on a recurring basis when it produces errors, or should we avoid it? Avoiding it might cause us to send unnecessary “message_tags”. Could we answer all private messages using HUMAN_AGENT once it is approved (our answers are always given by a customer service agent)?
Best regards
You do not mention your actual use case, so nobody can suggest any message tags that would match that use case.
Without knowing that use case the answer to your questions can only be:
1) There is no way to extend the 7-day window for the human agent tag. If you get approved for it, you have a 7-day window, not 8 and not 20. However, most user actions reset that window, so you should follow up within it and make sure the user engages with your bot; the window is then reset and you have another 7 days for another update.
2) Abusing tags will most likely result in your page being restricted, so make sure to only use them for the allowed use cases as listed in the docs: https://developers.facebook.com/docs/messenger-platform/send-messages/message-tags/
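For reference, a tagged message is just an ordinary Send API call with two extra fields. A minimal sketch with Python and the requests library; the Graph API version, page token, and PSID are placeholders/assumptions you must supply:

```python
import requests

PAGE_TOKEN = "replace-with-page-access-token"  # placeholder

def send_tagged_message(psid, text):
    # POST to the Messenger Send API; messaging_type=MESSAGE_TAG plus a
    # tag is what exempts the message from the 24-hour window.
    resp = requests.post(
        "https://graph.facebook.com/v12.0/me/messages",  # version is an assumption
        params={"access_token": PAGE_TOKEN},
        json={
            "recipient": {"id": psid},
            "messaging_type": "MESSAGE_TAG",
            "tag": "HUMAN_AGENT",  # only valid once your page is approved for it
            "message": {"text": text},
        },
    )
    resp.raise_for_status()
    return resp.json()
```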

Stuck with understanding how to build a scalable system

I need some guidance on how to properly build out a system that will be able to scale. I will give you some information about what I am trying to do and then ask my specific question.
I have a site where I want visitors to send some data to be processed. They input the data into a textarea or upload it in a file. Simple. The data is somewhat preprocessed on the client side before a POST request is made to a REST endpoint.
What I am stuck on is this: what is a good way to take this posted data, store it, and associate an id with it that references the user, given that I cannot process the data fast enough to return it to the user in a reasonable amount of time?
This question is a bit vague and open to opinion, I admit. I just need a push in the right direction to keep moving. What I have been considering is throwing the data into a message queue, having some workers process it elsewhere, and, once the data is processed, alerting the user where to find it with some sort of link to an S3 bucket or just a URL to a file. The other idea was to run a client-side loop that sends each item to another endpoint that already processes individual records. The problem with that idea is as follows:
Processing may take anywhere from 30 minutes to 2 hours depending on the amount of data submitted. It's not ideal for users to just sit there and wait for that to finish, so I have mostly ruled this option out.
Any guidance would be very much appreciated as I don't have any coworkers to bounce things off of, nor do I know many people with the domain knowledge that I could freely ask. If this isn't the right place to ask this, could you point me in the right direction as to where it should be asked?
Chris
If I've got you right, your pipeline is:
1) Accept an item from the user
2) Possibly preprocess/validate it (?)
3) Put it into some queue
4) Process the data
5) Return the result.
You may use one or several queues at stage (3). An item from a user gets added to one of the queues. If it's big enough, it can be stored in S3 or similar storage, with only info about it put into the queue: a link, the add date, and the user id (or email or the like). Processors can pull items from the queue and give feedback to users.
If you have no strict requirements on ordering, things get much simpler: you don't need any synchronization between the components. Treat all the components (upload acceptors, queues, storage, and processors) as independent pools of processes. Monitor each pool separately. If some pool is a bottleneck, add machines to it.
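A minimal sketch of that hand-off, assuming Redis as the queue via redis-py; process_records() and notify_user() are placeholders for your real processing and notification steps:

```python
import json
import uuid

import redis

r = redis.Redis()

def process_records(payload):
    """Placeholder for the real 30-minute-to-2-hour processing job."""
    return "https://example.com/results/" + str(uuid.uuid4())

def notify_user(user_id, result_url):
    """Placeholder: e-mail the user a link to the finished file / S3 object."""

def handle_upload(user_id, payload):
    """Called by the REST endpoint: enqueue the job, return an id at once."""
    job_id = str(uuid.uuid4())
    r.lpush("jobs", json.dumps(
        {"job_id": job_id, "user_id": user_id, "payload": payload}))
    return job_id  # respond to the client immediately with this id

def worker_loop():
    """Run in separate worker processes; scale by adding more of them."""
    while True:
        _, raw = r.brpop("jobs")  # blocks until a job is available
        job = json.loads(raw)
        url = process_records(job["payload"])
        notify_user(job["user_id"], url)
```

Each worker is independent, so the pools scale exactly as described above: if the queue grows faster than it drains, start more worker processes.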

Want to make an app that runs continuously, how?

I'm working on an app, and I want it to do the following things:
1) Automatically post on a user's wall at a specific time (say 11:00am every day)
2) Take some action whenever a post is made on the user's wall
How can I do this? Basically, how can I make it post at a specific time? How can I make this an event that's triggered? Or could I use a server that constantly loops until it's the correct time?
I'm a noob, so please post as many details as possible! For example, could I do this using Google App Engine?
Running continuously on App Engine can be done with backends, but you should probably use cron to trigger tasks at specific times/intervals. See "Scheduled Tasks With Cron for Python": https://developers.google.com/appengine/docs/python/config/cron
You can set max and min instances to 1-1, but you need a business account to do that. https://developers.google.com/appengine/docs/adminconsole/instances
The disadvantage is that you are billed for the instance 24/7/365 (as the instance is resident) and you will not scale.
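A minimal sketch of the cron approach, assuming the classic App Engine Python runtime with webapp2; the URL, the schedule, and post_to_wall() are placeholders:

```python
# cron.yaml (placeholder schedule matching the 11:00am example):
#   cron:
#   - description: daily wall post
#     url: /tasks/daily_post
#     schedule: every day 11:00
import webapp2

def post_to_wall():
    """Hypothetical helper: call the Graph API to create the wall post."""

class DailyPostHandler(webapp2.RequestHandler):
    def get(self):
        # App Engine cron issues a GET to this URL at the scheduled time.
        post_to_wall()
        self.response.write('ok')

app = webapp2.WSGIApplication([('/tasks/daily_post', DailyPostHandler)])
```

For the second requirement (reacting to new wall posts), cron can only poll; a handler like this one could run every few minutes and diff the wall feed.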

New/Read Flags in CQRS

I am currently drafting a concept for a (mostly) HTML-based collaboration suite which I plan to implement using CQRS. This software will contain messages that can be sent to the user (which can either be read or unread, obviously) and other elements which shall be marked "new" if they were created after the last user login.
Hardly something new, but I am not quite sure how that would be correctly implemented using CQRS. As I understand it, change of any kind should, without exception, only be possible via commands. But creating a command for every single (new) element that is being accessed seems a bit much, not to mention the overhead.
I don't know if I need it, but what would be the best way to implement a last-accessed timestamp on elements? It is basically the same problem as the above, with the difference that the change happens EVERY time the element is accessed, not only the first time for each user.
CQRS seems to be an awesome concept but it really needs more learning material. Can't wait till a book is released :)
Regards
[Edit] No one? I wouldn't have thought that this is such a complicated issue.
I assume you're using event sourcing, in which case, once you allow your query service/event handlers to raise appropriate events, this becomes fairly easy to solve.
For your messages/elements: when handling the specific creation events of your elements, either add to existing event handlers or create additional ones that store to a messages read-model with a status of "new" and appropriate information about the element.
As part of your user login, I don't see why you can't raise a user-logged-in event (from the security/query service, depending on how you're implementing authentication) to say the user has logged in. An event handler could capture this and write the last-login timestamp to a specific user-last-login read-model.
In addition, the user-logged-in event handler would need to update all the new messages (for that user) to an unread status. Seeing as we're changing the status of the messages as the user logs in, do you still need to store the last-login timestamp?
For your last-accessed timestamp, perhaps you could just work this into your query service as queries for your different elements complete: raise a query-completed event with element id/type information.
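A minimal sketch of the read-model side of that idea; the event names and the dict-backed store are illustrative, not a prescribed framework:

```python
from datetime import datetime, timezone

class MessagesReadModel:
    """Denormalized store updated only by event handlers, queried directly."""

    def __init__(self):
        self.status_by_user = {}  # user_id -> {message_id: "new"/"unread"/"read"}
        self.last_login = {}      # user_id -> timestamp

    def on_message_created(self, user_id, message_id):
        # Creation event handler: record the element with a status of "new".
        self.status_by_user.setdefault(user_id, {})[message_id] = "new"

    def on_user_logged_in(self, user_id):
        # Write the last-login timestamp to its own read-model ...
        self.last_login[user_id] = datetime.now(timezone.utc)
        # ... and demote everything that was "new" to plain "unread".
        inbox = self.status_by_user.get(user_id, {})
        for message_id, status in inbox.items():
            if status == "new":
                inbox[message_id] = "unread"

    def on_message_read(self, user_id, message_id):
        # Raised by the query service when the element is actually opened.
        self.status_by_user.setdefault(user_id, {})[message_id] = "read"
```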

Best practices to follow/read large mailing-lists?

Many of you are probably subscribed to various mailing lists, some higher-traffic than others.
What are your best practices for following all the information going through these lists?
What are the best clients you've used to manage that?
I'm sure I'm not the only one trying to get the best signal out of this noisy way of communication :)
I like gmail because of the way it groups messages by conversation so I can just page down through a thread.
Use a rule in GMail to slap a label on and archive all of them. Then they are easily sortable, searchable, and threaded.
I just use Thunderbird: for some lists in flat mode, for others (the Lua mailing list) in threaded mode. Following is natural for a mailing list; the messages are pushed to your client.
At first, I just received the messages and routed them to the right folder with some rules.
Now I read them as newsgroups using Gmane, which also allows me to catch up on history (including mails that were sent before my subscription started and those sent during a temporary unsubscription).
Sometimes, when a thread holds no interest for me, I just right-click on the first message and select "Mark all messages of this thread as read".
On KDE, I am using Kontact for my mail and RSS feeds. That gives me a nice command center.