
Best practice (Python) for repetitive file-system tasks?

What is the best approach for a Python utility script that checks for the existence of a new file and processes it? Normally this would be done with cron or a scheduled task, which is not allowed in this environment. Is there a recommended or best practice for handling this task? Currently, the script uses an "infinite loop" with a sleep timer between executions and an optional upper limit on executions: if the limit is set, it only loops "n" times before quitting the process (a rough sketch of this pattern is shown below). I am hoping there is a better way to do this.

Limitations: This is a Linux platform, and we don't have a lot of system-level access/utilities. Cron is not allowed. The installed version is Python 2.7.5 (until I can convince the owners that this will be a liability and that they should upgrade to a newer version).

Reasons: I don't like "infinite loops". I think that using "sleep" is a questionable "make do" practice. I feel these approaches essentially take control out of the programmer's hands and leave things hanging at the mercy of external events, and I don't know how well they perform at the system level. This is on a data processing server running mainly ETL workloads.

Does anybody have any thoughts or approaches? I am open to the answer being "Nope, you don't really have alternatives in this case."
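For illustration only, here is a minimal sketch of the polling pattern described above. The watch directory, poll interval, and process_file() placeholder are hypothetical and not taken from the original script; the code runs on Python 2.7 as well as Python 3.

    import os
    import time

    WATCH_DIR = "/data/incoming"   # hypothetical directory to watch
    POLL_SECONDS = 60              # hypothetical sleep interval between passes
    MAX_ITERATIONS = None          # set to an integer "n" to stop after n loops

    def process_file(path):
        # Placeholder for the real processing/ETL step.
        print("processing %s" % path)

    def main():
        seen = set()               # files already handled during this run
        iterations = 0
        while MAX_ITERATIONS is None or iterations < MAX_ITERATIONS:
            # Look for files that have appeared since the last pass.
            for name in sorted(os.listdir(WATCH_DIR)):
                path = os.path.join(WATCH_DIR, name)
                if os.path.isfile(path) and path not in seen:
                    process_file(path)
                    seen.add(path)
            iterations += 1
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        main()

Note that the "seen" set in this sketch only persists for the lifetime of the process; a real script would likely move, rename, or otherwise record processed files so restarts do not reprocess them.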

15th Nov 2019, 5:54 PM
JoeSponge
1 Answer