Rewrite the application to be less greedy in the number of requests it submits to the server, and make (better) use of caching. That’ll probably lower the number of concurrent requests that have to be handled.
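A minimal sketch of what client-side caching could look like, assuming the client is Python code talking to the server over HTTP with the requests library; the cached_get helper, the TTL, and the endpoint are illustrative, not part of the original suggestion:

```python
# Minimal sketch: reuse a cached response while it is still fresh,
# so repeated lookups don't turn into repeated server requests.
import time
import requests

_cache: dict[str, tuple[float, bytes]] = {}
CACHE_TTL = 60  # seconds; tune to how fresh the data needs to be

def cached_get(url: str) -> bytes:
    """Return the response body, reusing a cached copy while it is fresh."""
    now = time.monotonic()
    hit = _cache.get(url)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]  # fresh enough: no request goes out
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    _cache[url] = (now, resp.content)
    return resp.content
```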
Even a real-time operating system cannot guarantee that serial/network input will arrive in time.
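A minimal sketch of the practical consequence, assuming Python application code reading from a TCP socket; the host, port, deadline, and read_with_deadline helper are hypothetical:

```python
# Treat "input didn't arrive in time" as a normal case rather than
# assuming delivery. The deadline value here is only illustrative.
import socket

def read_with_deadline(host: str, port: int, deadline_s: float = 0.1) -> bytes | None:
    """Return received bytes, or None if nothing arrived before the deadline."""
    with socket.create_connection((host, port), timeout=deadline_s) as sock:
        sock.settimeout(deadline_s)
        try:
            return sock.recv(4096)
        except socket.timeout:
            return None  # missed deadline: the caller must handle this path
```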
Is this for an open-source software project, and if so, can you tell us more about it?
If it’s for work or a university project, you should share your salary and/or credit with whoever gives you a solution.
Hirom@beehaw.org to Programming@beehaw.org • Microsoft Brings Python Programming To Excel, Enabling Advanced Data Analysis
1 · 2 years ago
Good point, that’s another difference between the two, although you can probably achieve the same result with both.
Not depending on the cloud to process your data is more important, in my opinion.
Hirom@beehaw.org to Programming@beehaw.org • Microsoft Brings Python Programming To Excel, Enabling Advanced Data Analysis
9 · 2 years ago
… and Python that actually gets executed on your machine, not someone else’s machine (i.e. the cloud).
Wrong choices happen when useful historical data is deleted for the sake of short-term cost savings.
Wrong choices also happen when data is created unnecessarily, such as logging and storing everything at a verbose level, just in case.
Storage can be cheap in some cases, but highly-available, high-performance cloud storage is very expensive. And in any case, it’s not infinite.
The way to keep useful data is to be strategic and only store relevant logs. Fine-tune the retention policy, especially for the fastest-growing data. Storing everything on high-cost storage, without a smart retention policy, could lead to deleting git data to make room for a mix of debug logs and random shit.
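A minimal sketch of what “store only relevant logs, with bounded retention” could look like, using Python’s standard logging module; the WARNING threshold and the 14-day retention are illustrative values:

```python
# Keep only relevant logs (WARNING and above) and cap how long they live:
# rotate daily and delete anything older than 14 rotated files.
import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "app.log",
    when="D",        # rotate once per day
    interval=1,
    backupCount=14,  # keep 14 rotated files, older ones are deleted
)
handler.setLevel(logging.WARNING)  # skip DEBUG/INFO noise on expensive storage

logging.basicConfig(level=logging.WARNING, handlers=[handler])
logging.getLogger(__name__).warning("retention policy in effect")
```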