I guess these guys are just plain old tools.
Someone interested in many things.
The little I’ve seen of Joe seems like this:
Some rich guy you’ve never heard of: “So, umm, yeah, I’ve been trying this new form of yoga.”
Joe: *hits blunt and drinks something harmful* “Oh yeah?”
Guy 1: *burp* “Yeah, and it’s really opened my eyes and shit, y’know?”
Joe: “Oh really?”
(This but for who knows how long).
I forgot: are Lemmy’s Active and Hot sorts chronological? They’re pretty decent, but I do find stale content gets stuck on one that isn’t there on the other.
Tbh, I haven’t really had this issue in a few weeks. I’m tempted to think it’s usage-related, which could indicate that my memory allocation for the DB is still too high.
That’s not universally true, at least if you’re not on the same LAN. For example, most small-scale apps hosted on VPSs are typically configured with a public-facing SSH login.
You can always brute force the SSH login and take a look around yourself. If you leave an apology.txt file in /home, I’m sure the admin won’t mind.
You can if you want. Reply here with the link if you do (or mention me if that’s a thing on Lemmy).
Yeah, mine have technically happened after reboots, although it typically takes at least a few days for the problem to creep up. This past time, I had basically a whole week in before things went to crap.
I did that a while ago, and unfortunately, it didn’t really help. I don’t think it’s an issue of RAM, but rather a daemon or something periodically going nuclear with resource utilization. A configuration issue, perhaps?
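For what it’s worth, since the Ansible install runs everything in Docker containers (assuming yours does too), a crude watcher like this could at least tell you which container goes nuclear next time. Just a sketch, and the log path is arbitrary:

```sh
# append per-container CPU/memory usage to a log every 30 seconds,
# so the next spike leaves a trail pointing at the guilty container
while true; do
  date >> /var/log/lemmy-stats.log
  docker stats --no-stream --format "{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" >> /var/log/lemmy-stats.log
  sleep 30
done
```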
The problem is that an update inherently involves restarting everything, which tends to resolve the problem anyway. Whether the update actually fixed things or the restart just helped temporarily is something you can only find out after a few days.
I’ll save this to look at later, but I did use PGTune to set my total RAM allocation for PostgreSQL to 1.5GB instead of 2GB. I thought this solved the problem initially, but the problem is back even though my config is still at 1.5GB (set as 1536MB to avoid unit confusion).
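For context, the memory-related settings PGTune hands back for a ~1536MB budget look roughly like this (ballpark illustrative numbers, not my exact config):

```
# postgresql.conf (memory-related lines only; ballpark values for ~1536MB total)
shared_buffers = 384MB          # ~25% of the budget
effective_cache_size = 1152MB   # ~75% of the budget
maintenance_work_mem = 96MB
work_mem = 4MB
```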
This issue occurred a few weeks ago as well, even when we had very little traffic. We still have peanuts compared with other instances.
Oh, and for completeness:
We’ve deleted the vast majority of the spam bots that spammed our instance, are currently on closed registration with applications, and have had no anomalous activity since.
Our server is essentially always at 50% memory (1GB/2GB), 10% CPU (2 vCPUs), and 30% disk (15-20GB/60GB) until a spike. Disk utilization does not change during a spike.
Our instance is relatively quiet, and we probably have no more than ten truly active users at this point. Membership may be ticking up, but slowly enough to be negligible.
This issue has happened before, but I assumed it was fixed when I changed the PostgreSQL configuration to utilize less RAM. This is still the longest lead-up time before the spikes started.
When the spike resolves itself, the instance works as expected. The service interruptions seem to stem from a drastic increase in resource utilization, which could be caused by some software component that I’m not aware of. I used the Ansible install for Lemmy and have only modified certain configuration files as required. For the most part, I’ve only raised client_max_body_size in the nginx configs to allow larger images, and added settings for an SMTP relay to the main config.hjson file. The spikes occurred before these changes, which leads me to believe they’re caused by something I have not yet explored.
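If it’s useful for comparison, the SMTP block I mean in config.hjson is roughly this shape (values are placeholders, and field names can shift between Lemmy versions, so check the docs for yours):

```hjson
email: {
  smtp_server: "smtp.example.com:587"
  smtp_login: "user@example.com"
  smtp_password: "change-me"
  smtp_from_address: "noreply@example.com"
  tls_type: "starttls"
}
```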
These issues occurred on both 0.17.4 and 0.18.0, which seems to indicate it’s not a new issue stemming from a recent source code change.
The Ansible install does make things a lot simpler, but it’s still pretty involved if you’re new to self-hosting in general. For example, you might need to set up an SMTP relay if you can’t forward a workable port, and you’ll also probably want to change your nginx configs to allow image uploads larger than the default single megabyte.
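For the image size bit, it’s a one-line change in the nginx site config (20M here is just an example value, pick whatever limit you’re comfortable with):

```nginx
# inside the server block that proxies to lemmy / pict-rs;
# nginx defaults to 1M, which rejects most photos
client_max_body_size 20M;
```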
Lemmy is pretty fun to host. Doubly so if you host a private instance with low latency; you’d basically be defederation proof.
I have a Twitter account, but I haven’t even signed into it or looked at Twitter in a month or more. It’s just a shittier version of Lemmy or Reddit where people need to actually bother getting followers to be heard, and they can only say things with little to no context. Not really a huge surprise that it’s the Waffle House of (not so) intelligent internet debates.
Yeah, and if you do charge separately for shipping, selling fees still apply to the amount charged for shipping.
Separate question, but can’t you follow Mastodon profiles on Lemmy?
Typically, automating (or paying someone to manually push out) updates to as many channels as possible is the most advantageous option. Realistically, a website with an associated RSS feed, Twitter, a Mastodon account, and a text-message update option would probably cover most of the bases.
I had no idea FOSS tax software was a thing. Huh. I’ll try and play around with it at some point and let you know.