• 0 Posts
  • 39 Comments
Joined 4 years ago
Cake day: February 15th, 2021



  • Ferk@lemmy.ml to Linux@lemmy.ml · Hyprland is now fully independent!
    2 months ago

    Which is why you should only care about the personal opinion of those people when it actually relates to that reliability.

    I don’t care whether Linus Torvalds likes disrespecting whichever company or person he wants to give the middle finger to, or throws rants at particular individuals on the mailing list or Mastodon, so long as he keeps doing a good job maintaining the kernel and keeps accepting contributions from those same people when they provide quality code, regardless of whatever feelings he might have about the opinions they hold.

    You rely on the performance of the software, the clarity of the docs, the efficiency of the bug tracking… but the opinions of the people running those things don’t matter so long as they stay reliable.


  • Ferk@lemmy.ml to Linux@lemmy.ml · Hyprland is now fully independent!
    2 months ago

    I have contributed to other projects without really needing to get involved in their community on any personal/parasocial level, though.

    I just make a pull request: when the code was good it was accepted, and when it wasn’t it got rejected. Sometimes I’ve had to make changes before it got merged, but I had no need to engage in discussions on Discord or anything like that. I’ve been on some mailing lists to keep track of some projects, but never really engaged deeply, especially when they go off-topic.

    If I find that a good code contribution is rejected for whatever toxic reason, then the consequence is that the code stops being as good as it could have been (because contributions are being rejected or slowed down), and it’s then that forking might be in order. Of course the code matters.


  • Ferk@lemmy.ml to Linux@lemmy.ml · Hyprland is now fully independent!
    2 months ago

    To his point: if not “discuss”, what is the correct approach against fascism? War and murder? Dismissing it, trying to “cancel” it without giving any arguments, so it can continue to fester on its own and keep growing in opposition?

    To me, fascism is a stupid position that doesn’t make much sense, to the point that it falls apart the moment you “discuss” it.

    I would have expected the fascists to be the ones unable or unwilling to discuss their position, since it’s the least rational one. So it’s certainly very jarring whenever I hear people jump to defend against fascism while at the same time stopping in their tracks when it comes to discussing it. Even if those unable to reason might not be convinced by our arguments, anyone with reason would be. Rejecting discussion does a disservice, because it puts off those willing to listen and strengthens those who didn’t really want an argument anyway.

    Like flat-earthers, they should be challenged with reason and discussion, not dismissed as if it were true that there’s a huge conspiracy against them. Whether or not they listen to that reason, dehumanizing them and rejecting civil, rational discourse plays in favor of their movement.

    Stating “genocide is bad” should NOT be a statement of faith. Faith is the shakiest of grounds: if we are unable to articulate the specific reasons that make genocide bad, then we are condemned to see it repeat itself. So, I’d argue it’s for the sake of the victims of Auschwitz that antifascism should not be turned into a religion, but into a solid and rational position that’s neither distorted nor used willy-nilly.


  • To each their own. For me, good lore and dialogue are what make a good RPG stand out.

    If I want action and reflexes, I’d go play an action game. If I want strategy, I’d go for a puzzle game, a 4X, a deckbuilder, etc. But in a proper RPG what I look for is good lore, an engaging story and some level of freedom that makes me feel I’m having an impact on that world. If AI can help with immersion and/or dynamic changes, I’m all for it. Of course, for that to happen they need to make sure it stays in character and doesn’t hallucinate something incoherent.

    If there’s an AI chatbot that can actually stay coherent and be set up as a game, without it feeling like you have to feed the AI too many instructions to push the narrative (I think AI Dungeon gets close), then you could almost consider that an RPG already. After all, the first RPGs were all text-based, so I would consider that the first iteration of an AI-based RPG. Translating that to a live 3D environment would be the next step.



  • The article talks about how they are ok with using AI for things other than generating images, text and so on. For example, they are fine with the rudimentary AI of any typical enemy in one of their games. So I expect procedural generation that doesn’t rely on trained Bayesian network models is ok for them.

    It looks like they are just concerned about the legality of it… so they might start using it as soon as the legal situation around AI models is settled.


  • Bash. By default it might seem less featureful than zsh… but bash is a lot more powerful and extensible than some give it credit for. It might take more work to set it up the way you like it, but once you do, that configuration can be ported wherever bash exists (i.e. almost everywhere).




  • It can be formatted “nicely” with no issue. But that doesn’t necessarily make it easy to understand.

    What that person posted was in a function named smb() that only gets called by rmb() under certain conditions, and rmb() gets called by AdB() under other conditions after being called from eeB() used in BaP()… it’s a long list of hard-to-read minified functions and variables in a mess of chained calls, declared in an order that doesn’t necessarily match what you’d expect the flow to be.

    In the same file you can also easily find references to the user agent being read at multiple points, sometimes stored in variables with equally esoteric short names that might slip past the reader if they aren’t pedantic enough.

    Like, for example, there’s this function:

    function vc() {
        var a = za.navigator;
        return a && (a = a.userAgent) ? a : ""
    }
    

    Searching for vc() gives you 56 instances in that file, often compared against strings to check what browser the user is using. And that’s just one of the places where the userAgent is obtained; there’s also a yc=Yba?Yba.userAgentData||null:null; later on too… and several direct uses of both userAgent and userAgentData.
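
    Just as a made-up illustration of the kind of comparison I mean (this is not an actual snippet from that file; the Firefox check here is my own invention), one of those call sites could look something like:

    // hypothetical illustration only, not copied from the actual file
    var ua = vc();
    if (ua.indexOf("Firefox") !== -1) {
        // branch that only runs when the user agent looks like Firefox
    }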

    And I’m not saying that the particular instance that was pointed out was the cause of the problem… it’s entirely possible that the issue is somewhere else… but my point is that you cannot point to a snippet of “nicely formatted” but messed-up transpiler output, without fully understanding when it gets called, and expect to draw accurate conclusions from it.


  • It doesn’t really matter whether it was “targeted” at Firefox specifically or not; what matters is whether the website has logic that discriminates against Firefox users. Those are two different things: “ends” vs “means”.

    I wouldn’t be surprised if the logic was written by some AI without specifically targeting any browser, and from the training data the AI concluded that there’s a high enough chance of ad blocking to justify handicapping the UX when the browser happens to be Firefox. Given that all it’s doing is slowing the website down (instead of outright blocking users), it might be that this is just a lower level of protection they added for cases where there are some indicators, even if there isn’t 100% confidence that an ad blocker is being used.


  • That’s out of context. That snippet of code existing is not sufficient to understand when that part of the code actually gets executed, right?

    For all we know, that might have been taken from a piece of logic like this that adds the delay only for specific cases:

    if ( complex_obfuscated_logic_to_discriminate_users ) {
    
        setTimeout(function() {
            c();
            a.resolve(1)
        }, 5E3);
    
    } else {
    
        c();
        a.resolve(1)
    
    }
    

    It’s possible that complex_obfuscated_logic_to_discriminate_users has some logic that changes based on user agent.

    And I expect it’s more complex than just one if-else. I haven’t had the time to check it myself, but there’s probably a mess of extremely hard-to-read obfuscated code, the result of compilation steps purposefully designed to make it very hard to understand when certain paths actually get executed, as a way to make tampering more difficult.
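
    Purely as a hypothetical sketch of what I mean (the names here are invented for illustration; this is not the actual code), that condition could come down to something like:

    // hypothetical, de-obfuscated sketch; none of these names come from the real file
    function complex_obfuscated_logic_to_discriminate_users() {
        var ua = (navigator && navigator.userAgent) || "";
        var looksLikeFirefox = ua.indexOf("Firefox") !== -1;
        // imagine some separate heuristic elsewhere flags a suspected adblocker
        var suspectedAdblock = window.someAdRelatedCheckFailed === true;
        return looksLikeFirefox && suspectedAdblock;
    }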


  • Ferk@lemmy.ml to Linux@lemmy.ml · *Permanently Deleted*
    10 months ago

    It’s changing by having a library like wlroots do most of the work.

    When you consider the overall picture, “wlroots + compositor” is actually less complex than “X11 + window manager”, because you no longer need to account for the insanely high cost of having a whole team maintain the spaghetti mess of X11 code.

    Wayland-based dwl has roughly the same line count as X11-based dwm (about 2.2k), without having to depend on a whole separate service as big as X11.

    But of course, since it’s a completely different approach, it’s likely that for most smaller projects (i.e. not Gnome or KDE) it’s easier to start a new project than to create a layer that maintains two parallel implementations.

    If you want something that’s more or less compatible with openbox, there seems to be this project, labwc, which claims to be inspired by openbox and compatible with its config/themes… though I haven’t personally tried it.

    Also keep in mind that openbox (and I expect labwc too) doesn’t include any “panels” / “taskbars” or anything like that… and your X11 panels likely won’t work well if they don’t explicitly support Wayland (but I believe that, for example, xfce-panel now supports both).





  • I think part of the reason the long extension is often preferred is that it’s much clearer and it’s guaranteed to be supported and decompressed by the respective tools. Even when they don’t support tar archives, they’ll just give you the uncompressed tar in that case.

    It’s also very common to do that with other extensions (not just .tar) when compressing big files. For example, when archiving logs they’ll often be stored as .log.gz, which makes it immediately clear that it’s a log file compressed directly with gzip and meant to be examined with tools like zcat and zless.

    And in cases like that you really need it to be clear what data the gzip file stores, since gzip doesn’t necessarily keep metadata about the original file, so you might not be able to get back the original name/extension if you rename the .gz file.