• 0 Posts
  • 3 Comments
Joined 2 months ago
Cake day: December 6th, 2024


  • The expression “back to baseline” comes from science and engineering and literally means that something has returned to its previous flat average level (for example: “the power line noise level spiked when you turned the machine on but is now back to baseline”).

    Edit: not the average, but specifically the original flat level below which things would not fall. Sorry, it’s kind of hard to explain in words but very easy to point out on a graph or a scope, where it’s just this flat line to which things always return.

    That expression makes sense if you’re talking about the rate of growth itself (i.e. the Lemmy rate of growth spiked at the time of the Reddit changes and eventually went back to baseline, since Lemmy is not growing any faster now than it was before the Reddit changes), but it doesn’t make sense if you’re talking about user numbers, since the number of Lemmy users grew a lot with the Reddit changes and never went back to the pre-change average, not even close (see the small numeric sketch at the end of this comment).

    Your original post is not clear on which of those two things you meant when you wrote “back to baseline”, and your subsequent posts mainly talk about user numbers, giving the impression that that’s what your “back to baseline” is referring to, in which case you’re using the expression incorrectly.
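
    To make the rate-vs-level distinction concrete, here’s a tiny sketch with made-up numbers (purely illustrative, not actual Lemmy figures): the monthly growth rate spikes and then returns to its baseline, while the cumulative user count steps up and stays up.

    ```python
    # Hypothetical numbers only -- not real Lemmy statistics.
    # Monthly new-user counts (the growth *rate*): a spike, then back to baseline.
    monthly_signups = [10, 10, 10, 500, 200, 12, 10, 11]

    # Cumulative user count (the *level*): it steps up and never comes back down.
    total_users = []
    running = 0
    for n in monthly_signups:
        running += n
        total_users.append(running)

    print("growth rate:", monthly_signups)  # ends near the original ~10/month baseline
    print("user count:", total_users)       # ends far above the pre-spike level
    ```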


  • Look for a processor for the same socket that supports more RAM and make sure the motherboard can handle it; maybe you’re lucky and it’s not a limit of that architecture.

    If that won’t work, break up your self-hosting needs into multiple machines and add another second-hand or cheap machine to the pile.

    I’ve worked on designing computer systems that handle tons of data and requests, and often the only reasonable solution is to break up the load and throw more machines at it. For example, when serving millions of requests on a website, you put a load balancer in front that assigns user sessions and their associated requests to multiple machines: the load balancer pretty much just routes requests by user session whilst the heavy processing is done by the backend machines, so you can expand the whole thing by simply adding more machines (there’s a rough sketch of that session-based routing at the end of this comment).

    In a self-hosting scenario I suspect you’ll have a lot of margin for expansion by splitting services across multiple hosts and using stuff like network shared drives in the background for shared data, before you have to fully upgrade a host machine because you’ve hit that architecture’s maximum memory.

    Granted, if a single service whose load can’t be broken up into a cluster needs more memory than you can fit in any of your machines, then you’re stuck getting a new machine. But even then, by splitting services you can buy a machine with a newer architecture that handles more memory yet is still cheap (such as a cheap mini-PC) and move just that memory-heavy service to it, whilst leaving CPU-intensive services on the old but more powerful machine.
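
    Just to make the session-routing idea concrete, here’s a minimal, hand-wavy sketch of how a balancer can pin a session to one backend by hashing its session id. The backend addresses and session ids are invented for illustration; in practice real load balancers such as HAProxy or nginx give you this with sticky-session or hash-based balancing options.

    ```python
    import hashlib

    # Hypothetical backend pool -- the addresses are made up for the example.
    BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

    def pick_backend(session_id: str) -> str:
        """Route every request of a given session to the same backend.

        The balancer just hashes the session id and maps it onto the
        backend list, so it does almost no work itself -- the heavy
        lifting stays on the backend machines, and you scale by adding
        entries to BACKENDS (ideally with consistent hashing so fewer
        sessions get reshuffled).
        """
        digest = hashlib.sha256(session_id.encode()).hexdigest()
        return BACKENDS[int(digest, 16) % len(BACKENDS)]

    # The same session always lands on the same machine:
    print(pick_backend("session-abc123"))
    print(pick_backend("session-abc123"))  # same backend as above
    print(pick_backend("session-xyz789"))  # possibly a different backend
    ```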