Rewrite the application to be less greedy in the number of requests it submits to the server, and make (better) use of caching. That will probably lower the number of concurrent requests that have to be handled.
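To illustrate the caching idea, here is a minimal client-side TTL cache sketch in Python. All names (`fetch_with_cache`, the URL, the stand-in fetch function) are illustrative, not from the original post; the point is just that repeated requests are served locally instead of hitting the server again.

```python
import time

# Simple in-memory cache: url -> (timestamp, response)
_cache = {}

def fetch_with_cache(url, fetch_fn, ttl=60.0):
    """Return a cached response for `url` if it is younger than `ttl`
    seconds; otherwise call `fetch_fn(url)` and store the result."""
    now = time.time()
    entry = _cache.get(url)
    if entry is not None and now - entry[0] < ttl:
        return entry[1]  # cache hit: no request sent to the server
    value = fetch_fn(url)
    _cache[url] = (now, value)
    return value

# Demo with a stand-in fetch function that counts "server" hits.
calls = []
def fake_fetch(url):
    calls.append(url)
    return f"payload for {url}"

a = fetch_with_cache("/api/items", fake_fetch)
b = fetch_with_cache("/api/items", fake_fetch)  # served from cache
print(len(calls))  # the server was contacted only once
```

In a real application you would likely use proper HTTP caching (Cache-Control headers, ETags) rather than rolling your own, but the effect on concurrent server load is the same.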
Even a Real-Time Operating System cannot guarantee that serial/network input will arrive in time.
Is this for an open-source software project, and if so, can you tell us more about the project?
If it’s for a work or university project, you should share the salary and/or credit with whoever gives you a solution.
Good point, that’s another difference between the two. Although you can probably achieve the same result with both.
Not depending on the cloud to process your data is more important, in my opinion.
… and Python that actually gets executed on your machine, not someone else’s machine (i.e. the cloud).
Wrong choices happen when useful historical data is deleted for short-term cost savings.
Wrong choices also happen when data is created unnecessarily, such as logging and storing everything at a verbose level, just in case.
Storage can be cheap in some cases, but high-availability, high-performance cloud storage is very expensive. And in any case, it’s not infinite.
The way to keep useful data is to be strategic and only store relevant logs. Fine-tune retention policies, especially for the fastest-growing data. Storing everything on high-cost storage without a smart retention policy could lead to deleting git data to make room for a mix of debug logs and random shit.
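As a concrete example of per-category retention, a logrotate policy can keep fast-growing debug output on a short leash while retaining important logs much longer. Paths, filenames, and numbers below are illustrative, not from the original post:

```
# /etc/logrotate.d/myapp  (illustrative paths and values)
/var/log/myapp/debug.log {
    daily
    rotate 3        # keep only 3 days of verbose debug output
    compress
    missingok
    notifempty
}

/var/log/myapp/audit.log {
    weekly
    rotate 52       # keep a year of history for the logs that matter
    compress
    missingok
}
```

The same principle applies to cloud log sinks: set aggressive expiry on the verbose stuff and generous retention only where the data has lasting value.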