

I use scheduled Google Tasks and let my disdain for notifications drive task completion


It’s mostly a skill issue for services that go down when us-east-1 has problems in AWS - if you actually know your shit, a single region going down doesn’t take you down with it.
Case in point: Netflix runs on AWS and experienced no issues during this thing.
And yes, it’s scary that so many high-profile companies are this bad at the thing they spend all day doing


Pick-Up Group


Well, you might be inclined not to roll the feature out at all, depending on the results you see from the rollout or an A/B test. Also, having it written out with a date in the changelog binds you to that date, unless you want the embarrassment of not shipping at the promised time. Maintaining a changelog for a very large app development organization is also a pretty damn hard task - you have to coordinate whatever every team is releasing into a particular build.
I agree that getting cute with the changelog messages is a bit stale. Might as well not add anything at that point.


Modern mobile app development almost always releases features gradually behind feature flags, so changelogging things is not necessarily practical.
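The pattern usually looks something like the sketch below - FlagProvider and the flag name are made up for illustration, not any particular SDK:

    // Hypothetical flag-provider abstraction - in a real app this would be
    // backed by a remote config service.
    interface FlagProvider {
        fun isEnabled(flag: String): Boolean
    }

    class CheckoutScreen(private val flags: FlagProvider) {
        fun render() {
            if (flags.isEnabled("new_checkout_flow")) {
                renderNewCheckout()   // the gradually rolled-out variant
            } else {
                renderLegacyCheckout()
            }
        }

        private fun renderNewCheckout() { /* ... */ }
        private fun renderLegacyCheckout() { /* ... */ }
    }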


One Rich Asshole Called Larry Ellison


That’s right. The art of correctly handling savedInstanceState is unfortunately not exactly well understood
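For reference, a minimal sketch of doing it correctly in a plain Activity - the class, field, and key names are made up for illustration:

    import android.os.Bundle
    import androidx.appcompat.app.AppCompatActivity

    class SearchActivity : AppCompatActivity() {

        // Transient UI state that should survive the Activity being destroyed.
        private var query: String = ""

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // A non-null bundle means we are being recreated, e.g. after a
            // rotation or after the OS killed the backgrounded process.
            query = savedInstanceState?.getString(KEY_QUERY) ?: ""
        }

        override fun onSaveInstanceState(outState: Bundle) {
            super.onSaveInstanceState(outState)
            // Called before the app may be killed - stash the state here.
            outState.putString(KEY_QUERY, query)
        }

        companion object {
            private const val KEY_QUERY = "query"
        }
    }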


Your phone is trying to keep your battery alive. The lower the specs of the phone, the more aggressive the OS is.


No, apps closing between switches is not a matter of battery - it’s a core feature of Android related to the management of RAM. Whenever the OS needs more available RAM, it will close a backgrounded app to free those resources. This is why it happens more frequently on low-end devices - they generally ship with less RAM.
Some misguided vendors do limit background execution in incorrect ways in the name of saving battery, but in general, whether apps survive in the background is a story of RAM.
Interestingly enough, apps are supposed to be built to cope with being closed down due to lack of RAM and then restore seamlessly, but this is an art that is rarely done correctly in the Android development space. The OS support is there, though.
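For the curious, the Jetpack flavor of that OS support looks roughly like this - SavedStateHandle is the real API, while the ViewModel and key names are made up for illustration. Anything written to the handle survives the process being killed for RAM and is handed back on restore:

    import androidx.lifecycle.SavedStateHandle
    import androidx.lifecycle.ViewModel

    class SearchViewModel(private val state: SavedStateHandle) : ViewModel() {

        // Backed by the saved-state mechanism rather than a plain field, so the
        // value comes back after process death, unlike ordinary in-memory state.
        var query: String
            get() = state["query"] ?: ""
            set(value) { state["query"] = value }
    }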


That’s just a bit sloppy on their part in that case - you fix bugs on all applicable variants of the flag, otherwise you kind of negate the scientific validity of your results.


At the scale Google operates at, you need to be careful with rollouts.


I think this has been happening to me too, come to think of it.


The reason you roll things out slowly is that you want to make sure nothing’s getting fucked up - partly in terms of software malfunctions, partly in terms of usage metrics.
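In practice that tends to boil down to deterministic bucketing - a sketch of the general idea, with made-up names, not any particular vendor’s implementation:

    // Hash a stable user id into one of 100 buckets, so a given user is
    // consistently in or out of a given flag's rollout.
    fun inRollout(userId: String, flag: String, percent: Int): Boolean {
        val bucket = Math.floorMod((userId + flag).hashCode(), 100)
        return bucket < percent
    }

    fun main() {
        // Serve the new code path to ~10% of users, widening the percentage as
        // crash rates and usage metrics stay healthy.
        println(inRollout("user-42", "new_checkout_flow", 10))
    }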


I can recommend switching to reading books. An e-reader with a mild backlight is ideal for this use case, as you can keep the room pitch black and still be able to read. My time to fall asleep and my rate of early wakeups have both plummeted since I made the switch.


Dudes trying to assassinate them with cringe


To clarify - Sweden has a lesser variant of the U.S. credit score system, but it differs in some important ways. For example, you don’t have to get a credit card to ‘build credit’ - you are assumed to be in good standing unless you have unusual debt ratios and so on.
Sweden does not have a social credit score system.


This is true. They do not think, because they are next-token predictors, not brains.
With this in mind, you can still harness a few useful properties from them. Nothing like the kind of hype the techbros and VCs imagine, but a few moderately beneficial use cases exist.


LLMs are fundamentally unsuitable for character counting on account of how they ‘see’ the world - as a sequence of tokens, which can split words in non-intuitive ways.
Regular programs already excel at counting characters in words, and LLMs can be used to generate such programs with ease.
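Something this small already gets it right every time (‘strawberry’ being the classic example):

    fun main() {
        val word = "strawberry"
        val target = 'r'
        // Plain character counting - trivial for a program, awkward for an LLM.
        val count = word.count { it == target }
        println("'$target' appears $count time(s) in \"$word\"")  // prints 3
    }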


I have zero desire to live like that, to be completely honest with you.


This happens because LLMs operate on tokenized data, and they don’t really “see” the text in the form of individual characters.
You can quite reasonably get an LLM to generate a script that does the character counting and then run the script, to arrive at the correct answer.


Indeed, a wild statement if I ever heard one.