If you’re having to type out version numbers in your commands, something is broken.
I ended up having to roll my own shell script wrapper to bring some sanity to Python.
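For illustration, such a wrapper can be as small as this; the layout and names below are placeholders, a sketch of the idea rather than anyone's actual script:

    #!/bin/bash
    # pyrun: run a command inside a per-project virtualenv, creating it on first use.
    # Sketch only -- assumes a requirements.txt sitting next to the project.
    set -e
    venv=".venv"
    [ -d "$venv" ] || python3 -m venv "$venv"
    . "$venv/bin/activate"
    [ -f requirements.txt ] && pip install --quiet -r requirements.txt
    exec "$@"

Invoked as, say, pyrun python script.py, so the version and dependency juggling stays out of sight.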
Yes yes, I know all that. The fact remains that a permanent IP associated with an individual is personally identifying information. Even the variety in browser requests counts as such according to the GDPR, and that is usually pooled with lots of other users. This is clearly a level above that. It’s why, for example, I would not use the VPS for proxy web browsing: zero privacy.
What’s the downside you see from having a static IP address?
What’s the downside to having one’s phone number in the public directory? There’s no security risk and yet plenty of people opt out. It’s personally identifying information.
I don’t know if any companies provide reverse proxies without a CDN though.
Exactly.
You still need encryption between your CDN and your origin, ideally using a proper certificate.
It can be self-signed though, that’s what I’m doing and it’s partly to outsource the TLS maintenance. But the main reason I’m doing it is to get IP privacy. WHOIS domain privacy is fine, but to me it seems pretty sub-optimal for a personal site to be publicly associated with even a permanent IP address. A VPS is meant to be private, it’s in the name. This is something that doesn’t get talked about much. I don’t see any way to achieve this without a CDN, unfortunately.
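For reference, the origin certificate really is a one-liner to produce, assuming the CDN is set to accept self-signed certs from the origin; domain and lifetime here are placeholders:

    # Self-signed cert for the CDN-to-origin leg.
    openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
      -keyout origin.key -out origin.crt -subj "/CN=example.com"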
I guess it’s popular because people already use Github and don’t want to look for other services?
Yes, and the general confusion between Git and Github, and between public things and private things. It’s everywhere today. Another example: saying “my Substack” as if blogging had just been invented by this private company. So it’s worse than just laziness IMO. It’s a reflexive trusting of the private over the public.
I have some static sites that I just rsync to my VPS and serve using Nginx. That’s definitely a good option.
Agree. And hard to get security wrong cos no database.
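For anyone wondering what that workflow amounts to in practice, it’s roughly a one-line deploy plus a few lines of Nginx; paths and domain below are placeholders, not anyone’s real setup:

    # Deploy: push the generated site to the VPS.
    rsync -avz --delete public/ user@example.com:/var/www/mysite/

    # Minimal Nginx server block on the VPS (e.g. /etc/nginx/sites-available/mysite).
    server {
        listen 80;
        server_name example.com;
        root /var/www/mysite;
        index index.html;
    }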
If you want to make it faster by using a CDN and don’t want it to be too hard to set up, you’re going to have to use a CDN service.
Yes but this can just be a drop-in frontend for the VPS. Point the domain to Cloudflare and tell only Cloudflare where to find the site. This provides IP privacy and also TLS without having to deal with LetsEncrypt. It’s not ideal because… Cloudflare… but at least you’re using standard web tools. To ditch Cloudflare you just unplug them at the domain and you still have a website.
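One detail worth adding: for the IP privacy to mean anything, the origin should refuse direct connections, otherwise anyone scanning the IPv4 space can still tie the site to the VPS. A rough sketch with ufw, assuming Cloudflare and assuming their published address list (cloudflare.com/ips-v4) is still where it lives:

    # Default-deny, keep SSH, then allow HTTPS only from the CDN's ranges.
    ufw default deny incoming
    ufw allow ssh
    for range in $(curl -s https://www.cloudflare.com/ips-v4); do
        ufw allow from "$range" to any port 443 proto tcp
    done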
Perhaps it’s irrational, but I’m bothered by how many people seem to think that Github Pages is the only way to host a static website. I know that’s not your case.
This is a bit fuzzy. You seem to recommend a VPS but then suggest a bunch of page-hosting platforms.
If someone is using a static site generator, then they’re already running a web server, even if it’s on localhost. The friction of moving the webserver to the VPS is basically zero, and that way they’re not worsening the web’s corporate centralization problem.
I host my sites on a VPS. Better internet connection and uptime, and you can get pretty good VPSes for less than $40/year.
I preferred this advice.
Can recommend Hetzner (German IP). Good value and so far solid.
Before that I used OVH (French IP) for years but it ended badly. First they locked me out of my account for failing a 2FA check I had never asked for or been told about, and would not provide any recourse except sending them a literal signed paper letter, which I had to do twice because the first one they ignored. A nightmare which went on for weeks. And then, cherry on the cake, my VPS literally went up in smoke when their Strasbourg data center burned down! Oops! Looks like your VPS is gone, sorry about that, here’s a voucher for six months free hosting! Months later they discovered a backup but the damage was done. Never again.
i dont want to learn 400 obscure keystrokes among other nonsense. we dont need to hear about your text editing stockholm-syndrome.
This reads like projected insecurity. Or maybe even… jealousy.
always find myself needing to fire up a window manager just to get a browser eventually
A chromeless tiling WM is basically invisible and AFAIK has almost zero performance impact. That’s roughly what I do.
Launching it using the raw framebuffer means it blocks the screen until you close it, and there’s no means to do anything else except switching to another TTY, is that it?
Fair point about raw speed. I never found the keyboard-vs-mouse speed debate very interesting either.
But cognitive load is a double-edged sword. Sure, the first time you attempt a task, the abstraction of a GUI is really helpful. There’s nothing to remember, you just point and click around and eventually the task is done. But when you have a task with 7 steps which you have to do every 2 weeks, then the GUI becomes a PITA in my experience. GUIs are all but impossible to script, and so you’re gonna need a good memory if you want to get it done quickly and accurately. This is where CLI scripting becomes genuinely useful. Personally I have quite a few such tasks.
Great! I’d have guessed that going full framebuffer would be trickier than that. You’ve set me a new challenge.
How much can you actually do without a windowing environment? […] Opening images in fbi, PDFs in fbpdf, listening to music in cmus, watching movies in mplayer
Maybe not an “environment” but it sounds like you’re at least using a window manager. The PDFs and videos, not to mention web browser, are gonna be hard to pull off from a raw shell. [Hard but not that hard, apparently!]
But that’s a detail. Otherwise I share your enthusiasm, I’ve been doing things this way for a while. Basically: tiling window manager + TUI file manager + scripts which do precisely what I want, if possible in the terminal, if necessary by launching a GUI app. In practice the GUI apps are Firefox, mapping app, and messaging apps.
The general discovery I made was this: for the small price of foregoing pretty colors and buttons and chrome, you can get a computer to do exactly what you want it to do much quicker. Assuming a willingness to learn a bit of shell scripting, of course.
For example: I have a button which runs a script with getmail that pulls in my email and then deploys ripmime and weasyprint to convert it to datestamped PDF files, which it dumps with any attachments directly into an inbox folder. In other words, I have made ranger into my email client and I never need to “download” anything, it’s already there.
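In outline, such a pipeline looks something like this; a simplified sketch rather than the script verbatim (in particular, ripmime’s output naming is glossed over):

    #!/bin/bash
    # Sketch: fetch mail, explode each message's MIME parts, render the HTML body
    # to a datestamped PDF, and drop it plus any attachments into ~/inbox.
    set -e
    inbox=~/inbox
    maildir=~/Maildir/new

    getmail                                   # pull new messages into the Maildir

    for msg in "$maildir"/*; do
        [ -e "$msg" ] || continue
        tmp=$(mktemp -d)
        ripmime -i "$msg" -d "$tmp"           # split message into body parts + attachments
        stamp=$(date +%Y%m%d-%H%M%S)
        for part in "$tmp"/*.html; do         # part names depend on ripmime; glossed over
            [ -e "$part" ] || continue
            weasyprint "$part" "$inbox/$stamp-mail.pdf"
        done
        mv "$tmp"/* "$inbox"/ 2>/dev/null || true
        rm -rf "$tmp"
        rm -f "$msg"
    done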
And those PDFs I can then manipulate with a bunch of shell scripts that use standard utilities, i.e. to split them, merge them, shrink them, clean them of metadata, even make them look like they come from photocopied paper (dumb bank!). All the stupid shit I once did with 10 manipulations hunting thru menus with a pointer in a fiddly app and always forgetting how it was done. Now I just select the file in the terminal, hit a button and it’s done, I don’t even see the PDF.
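For concreteness, those scripts are mostly thin wrappers around stock tools; the sort of thing below, flags from memory, so treat it as a sketch rather than gospel:

    # Shrink (Ghostscript re-encode)
    gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -dNOPAUSE -dBATCH \
       -sOutputFile=small.pdf big.pdf

    # Merge
    qpdf --empty --pages a.pdf b.pdf -- merged.pdf

    # Split out pages 1-3
    qpdf input.pdf --pages . 1-3 -- first-three.pdf

    # Strip metadata
    exiftool -all= -overwrite_original doc.pdf

    # Fake the photocopied-paper look: rasterize, grey it, add noise, tilt slightly
    convert -density 150 in.pdf -colorspace gray -attenuate 0.2 +noise Gaussian -rotate 0.4 scan.pdf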
Of course, it’s not for everyone, but this is the promise of free computing.
While I appreciate the amount of development those companies bring to the table, the moment they’re in control of the project they’ll try to find ways to profit from it at the expense of the community, and it almost always results in a poorer product.
Yes, hard to argue with this. Or indeed anything else you just said. I agree that for any project it’s crucial that there be a wide variety of stakeholders.
Believe it or not, I’m being gradually won over by the arguments deployed in this discussion! Incredible but true.
Yes yes, these are good points. To be clear, IMO Debian is the ideal Ubuntu replacement. They have the pedigree, the credible claim to be the Universal OS. But have you seen Debian’s website? No way. Hopefully that will change one day.
Fair enough, and perhaps you’re right. Personally I’m reassured when a for-profit company backstops an open-source project. So many amateur projects turn into abandonware, an OS has to do better than that. But yes, Canonical could get into trouble too.
Personally I see not Mint but Debian as the best claimant to Ubuntu’s mantle. I just wish they would become a bit less amateurish. Maybe move towards the Wikimedia foundation model, get some serious resources, a better website and onboarding funnel, etc. Their ideological position is great, but if you want to change the world then at some point you need to behave at least somewhat like a private business.
Fair points. Admittedly I use a tiling window manager so I never see most of these problems.
My basic concern is with fragmentation. IMO many techies just don’t grasp how forbidding Linux is to normal people. Or the importance of reputation in people’s choice to take the leap. It’s all but priceless. Ubuntu-bashing has always struck me as a case of an elite group that prefers to split hairs rather than to take the win of getting extra users of FOSS. Idealism vs pragmatism, basically.
Anyway, I’m repeating myself. If you think that normies have heard of Mint already and that it won’t go away next year, then fine. The important thing is to get them to take the leap. They can always change distro later, the second time is much less forbidding.
In my opinion Ubuntu-bashing is unjustified and counterproductive.
Unjustified because Ubuntu is great! I say that having used it exclusively for years without a problem. That has to be worth something. Yes, there’s the Snap issue, and occasional shenanigans from Canonical, but so far these problems are not existential. For context I’ve been on Linux for 2 decades (also Debian) but I am not a typical techie (history major). Ubuntu just works.
Counterproductive because Linux needs a flagship distro for beginners. Just the word Linux is daunting to most normies! We absolutely need a beginner distro with name recognition. Well, this may hurt to hear but Ubuntu is basically the only candidate. Name recognition does not come cheap. At this point it is decades of work and we should not be squandering it.
The issue is more general. When dealing with, say, apt, my experience is that nothing ever breaks and any false move is immediately recoverable. When dealing with Python, even seemingly trivial tasks inevitably turn into a broken mess of cryptic error messages and missing dependencies which requires hours of research to resolve. It’s a general complaint. The architecture seems fragile in some way. Of course, it’s possible it’s just because I am dumb and ignorant.