I thought I'd make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!

I’ll try my best to answer any questions here, but I hope others in the community will contribute too!

  • SineIraEtStudio@midwest.social
    5 months ago

    Mods, perhaps a weekly post like this would be beneficial? It would lower the bar to entry with some readily available support, and help keep converts.

      • Arthur Besse@lemmy.mlM
        5 months ago

        Ok, I just stickied this post here, but I am not going to manage making a new one each week :)

        I am an admin at lemmy.ml and was actually only added as a mod to this community so that my deletions would federate (because there was a bug where non-mod admin deletions weren’t federating a while ago). The other mods here are mostly inactive and most of the mod activity is by me and other admins.

        Skimming your history here, you seem alright; would you like to be a mod of /c/linux@lemmy.ml ?

        • Cyclohexane@lemmy.mlOPM
          5 months ago

          Please feel free to make me a mod too. I am not crazy active, but I think my modest contributions will help.

          And I can make this kind of post on a biweekly or monthly basis :) I think weekly might be too often since the post frequency here isn’t crazy high

        • d3Xt3r@lemmy.nzM
          5 months ago

          Thanks! Yep, I mentioned you directly seeing as all the other mods here are inactive. I'm on c/linux practically every day, so I'm happy to manage the weekly stickies and help out with the moderation. :)

    • Julian@lemm.ee
      5 months ago

      Someone already gave an answer, but the reason it's done that way is that on Linux, programs generally don't install themselves - a package manager installs them. Windows (outside of the Windows Store) just trusts programs to install themselves and to include their own uninstaller.

    • NaN@lemmy.sdf.org
      5 months ago

      Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have on Windows or (hidden away) on macOS.

      If you compile programs yourself, you can choose to put things in different places. Some software is also built to be more self-contained, like the Linux binaries of Firefox.

      • krash@lemmy.ml
        5 months ago

        Actually, Windows puts 95% of its files in a single directory, and sometimes you get a surprise DLL in your \system[32] folder.

    • shadowintheday2@lemmy.world
      5 months ago

      You install program A; it needs and installs libpotato. Then later you install program B, which depends on libfries, and libfries depends on libpotato - but since you already have libpotato installed, only program B and libfries get installed. The intelligence behind this is called a package manager.

      On Windows, when you install something, it usually installs itself as a standalone thing and complains/breaks when dependencies are not met - e.g. having to install Visual C++ 2005-202x for games, the JRE for Java programs, etc.

      Instead of making you install everything you need to run something complex, the package manager does this for you and keeps track of where the files are.

      And each package manager/distribution has its own idea of where those files should be stored.
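      To make that concrete, here is a minimal sketch of what that looks like on a Debian/Ubuntu-style system (apt is just one example; dnf, pacman, zypper etc. work the same way conceptually, and the package name is only illustrative):

      # Installing a package also pulls in any libraries it needs that aren't installed yet
      sudo apt install vlc

      # Ask what a package depends on
      apt-cache depends vlc

      # Ask which files the package placed where
      dpkg -L vlc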

    • Max-P@lemmy.max-p.me
      5 months ago

      Expanding on the other explanations. On Windows, it’s fairly common for applications to come with a copy of everything they use in the form of DLL files, and you end up with many copies of various versions of those.

      On Linux, the package manager manages all of that. So if, say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution's package manager manages everything, mostly building it from source code, you get a version of the app compiled specifically against the version of GTK the distribution provides.

      So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it's not just one big self-contained package you drop in C:\Program Files. Linux follows the FHS (Filesystem Hierarchy Standard), which roughly defines where things should go. Binaries go to /usr/bin, libraries to /usr/lib, shared files go to /usr/share. A bunch of those locations are somewhat special; for example, .desktop files in /usr/share/applications show up in the menu to launch them. That said, Linux does have a location for big standalone packages: that's usually /opt.

      There are advantages and inconveniences to both methods. The Linux way has the advantage of being able to update libraries for all apps at once, it reduces clutter, and things are generally more organized. You can guess where an icon file will be located most of the time because they all go to the same place, usually with a naming convention as well.
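      If you want to see that layout on your own system, a few harmless commands (the app name is just an example; any installed app works):

      which firefox                  # the executable, usually /usr/bin/firefox
      ls /usr/share/applications     # .desktop launchers that show up in the menu
      ls /usr/lib | head             # shared libraries
      ls /opt                        # big standalone packages, if you have any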

    • Possibly linux@lemmy.zip
      5 months ago

      Because dependencies. You also should not be installing things you download off the internet, nor should you use install scripts.

      The way you install software is through your distro's package manager or Flatpak.
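      For example, the two usual routes look something like this (Debian/Ubuntu syntax shown; the app is just an example):

      # distro package manager
      sudo apt install gimp

      # Flatpak, from Flathub
      flatpak install flathub org.gimp.GIMP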

    • penquin@lemm.ee
      5 months ago

      I wish every single app installed in the same directory. Would make life so much easier.

        • penquin@lemm.ee
          5 months ago

          Not all. I've had apps install in /opt, and Flatpaks install in /var of all places. Some apps install in /etc/share/applications

          • teawrecks@sopuli.xyz
            5 months ago

            In /etc? Are you sure? /usr/share/applications has your system-wide .desktop files, (while .local/share/applications has user-level ones, kinda analogous to installing a program to AppData on Windows). And .desktop files could be interpreted at a high level as an “app”, even though they’re really just a simple description of how to advertise and launch an application from a GUI of some kind.

            • penquin@lemm.ee
              5 months ago

              OK, that was wrong. I meant /usr/share/applications. Still, more than one place.

              • teawrecks@sopuli.xyz
                5 months ago

                The actual executables shouldn’t ever go in that folder though.

                Typically packages installed through a package manager stick everything in their own folder in /usr/lib (for libs) and /usr/share (for any other data). Then they either put their executables directly in /usr/bin or symlink over to them.

                That last part is usually what results in things not living in a consistent place. A package might have something that qualifies as both an executable and a lib, so they store it in their lib folder, but symlink to it from bin. Or they might not have a lib folder, and just put everything in their share folder and symlink to it from bin.
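                You can see this on most systems by listing which entries in /usr/bin are actually symlinks and where they point, e.g.:

                find /usr/bin -maxdepth 1 -type l -exec ls -l {} + | head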

        • Ramin Honary@lemmy.ml
          5 months ago

          They do! /bin has the executables, and /usr/share has everything else.

          Apps and executables are similar but separate things. An app is a concept used in GUI desktop environments. They are a user-friendly front end to one or more executables in /usr/bin, presented by the desktop environment (or app launcher) as a single thing. On Linux these apps are usually defined in a .desktop file. The apps installed by the Linux distribution's package manager are typically in /usr/share/applications, and each one points to one of the executables in /usr/bin or /usr/libexec. You could even have two different "apps" launch the same executable, each using different CLI arguments to give the appearance of different apps.
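          For illustration, a minimal .desktop file looks something like this (the app name and paths are made up):

          [Desktop Entry]
          Type=Application
          Name=My Editor
          Comment=A hypothetical text editor
          Exec=/usr/bin/myeditor %F
          Icon=myeditor
          Terminal=false
          Categories=Utility;TextEditor;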

          The desktop environment you use might be reconfigured to display apps from multiple sources. You might also install apps from FlatHub, Lutris, Nix, Guix, or any of several other package managers. This is analogous to how in the CLI you need to set the “PATH” environment variable. If everything is configured properly (and that is not always the case), your desktop environment will show apps from all of these sources collected in the app launcher. Sometimes you have the same app installed by multiple sources, and you might wonder “why does Gnome shell show me OpenTTD twice?”

          For end users who install apps from multiple other sources besides the default app store, there is no easy solution, no one agreed-upon algorithm to keep things easy. Windows, Mac OS, and Android all have the same problem. But I have always felt that Linux (especially Guix OS) has the best solution, which is automated package management.

  • I Cast Fist@programming.dev
    5 months ago

    Why does it feel that Linux infighting is the main reason why it never takes off? It’s always “distro X sucks”, “installing from Y is stupid”, “any system running Z should burn”

    • johannesvanderwhales@lemmy.world
      5 months ago

      Linux generally has a higher (perceived?) technical barrier to entry, so people who opt to go that route often have strong opinions on exactly what they want from it. Not to mention that technical discussions in general are often centered around deciding what the "right" way to do a thing is. That said, regardless of how the opinions are stated, options aren't a bad thing.

      • wolf@lemmy.zip
        5 months ago

        This.

        It is a ‘built-in’ social problem: only people who care enough to switch to Linux do it, and these people are pre-selected to have strong opinions.

        Exactly the same can be observed in all kinds of alternative projects; for example, alternative housing projects usually die from infighting because everyone has their own definition of how things should work.

    • bloodfart@lemmy.ml
      5 months ago

      Because you don’t have an in person user group and only interact online where the same person calling all mandrake users fetal alcohol syndrome babies doesn’t turn around and help those exact people figure out their smb.conf or trade sopranos episodes with them at the lan party.

    • ipkpjersi@lemmy.ml
      5 months ago

      Linux users are often very passionate about the software they put on their computers, so they tend to argue about it. I think the customization and the number of choices scare off a lot of beginners, but I think the main reason is the lack of compatibility with Windows software out of the box. People generally want to use software they are used to.

    • msch@feddit.de
      5 months ago

      It did take off, just not so much on the Desktop. I think those infights are really just opinions and part of further development. Having choices might be a great part of the overall success.

      • I Cast Fist@programming.dev
        5 months ago

        just not so much on the Desktop

        Unix already had a significant presence in server computers during the late 80s, migrating to Linux wasn’t a big jump. Besides, the price of zero is a lot more attractive when the alternative option costs several thousand dollars

        • MonkeMischief@lemmy.today
          5 months ago

          the price of zero is a lot more attractive when the alternative option costs several thousand dollars

          Dang, I WISH. Places that constantly beg for donations like public libraries and schools will have Windows-everything infrastructure “because market share”. (This is what I was told when I was interviewing for a library IT position)

          They might have gotten "lucky" with a grant at some point, but having a bank of 30+ computers for test-taking that do nothing but run MS Access is a frivolous budget waste, and basically building your house on sand, when those resources could go to, I dunno… paying teachers, maybe?

          • Trainguyrom@reddthat.com
            5 months ago

            Licensing is weird especially in schools. It may very well be practically free for them to license. Or for very small numbers of computers they might be able to come out ahead by only needing to hire tech staff that are competent with Windows compared to the cost of staff competent with Linux. Put another way, in my IT degree program every single person in my graduating class was very competent as a Windows admin, but only a handful of us were any good with Linux (with a couple actively avoiding Linux for being different)

    • Cyclohexane@lemmy.mlOPM
      5 months ago

      Doesn’t feel like that to me. I’ll need to see evidence that that is the main reason. It could be but I just don’t see it.

      • I Cast Fist@programming.dev
        5 months ago

        I mean, Wayland is still a hot topic, as are snaps and flatpaks. Years ago it was how the GTK2 to GTK3 upgrade messed up Gnome (not unlike the python 2 to 3 upgrade), some hardcore people still want to fight against systemd. Maybe it’s just “the loud detractors”, dunno

        • Cyclohexane@lemmy.mlOPM
          5 months ago

          Why would one be discouraged by the fact that people have options and opinions on them? That’s the part I’m not buying. I don’t disagree that people do in fact disagree and argue. I don’t know if I’d call it fighting. People being unreasonably aggressive about it are rare.

          I for one am glad that people argue. It helps me explore different options without going through the effort of trying every single one myself.

          • billgamesh@lemmy.ml
            5 months ago

            I'm using Wayland right now, but still use X11 sometimes. I love the discussion and different viewpoints. They are different protocols, with different strengths and weaknesses. People talking about it is a virtue in my opinion.

            • ObliviousEnlightenment@lemmy.world
              5 months ago

              I can only use x11 myself. The drivers for Wayland on nvidia aren’t ready for prime time yet, my browser flickers and some games don’t render properly. I’m frankly surprised the KDE folks shipped it out

            • Captain Aggravated@sh.itjust.works
              5 months ago

              Being I’m on Mint Cinnamon and using an Nvidia card, I’ve never even tried to run Wayland on this machine. Seems to work okay on the little Lenovo I put Fedora GNOME on. X11 is still working remarkably well for me, and I’m looking forward to the new features in Wayland once the last few kinks are worked out with it.

            • MonkeMischief@lemmy.today
              5 months ago

              I like the fact that I can exercise my difficulty with usage commitment by installing both and switching between them :D.

              Wayland is so buttery smooth it feels like I just upgraded my computer for free…but I still get some window Z-fighting and screen recording problems and other weirdness.

              I’m glad X11 is still there to fall back on, even if it really feels janky from an experience point of view now.

              • billgamesh@lemmy.ml
                5 months ago

                For me, it’s building software from source on musl. Just one more variable to contend with

  • vort3@lemmy.ml
    5 months ago

    How do symlinks work from the point of view of software?

    Imagine I have a file in my downloads folder called movie.mp4, and I have a symlink to it in my home folder.

    Whenever I open the symlink, does the software (player) understand «oh this file seems like a symlink, I should go and open the original file», or it’s a filesystem level stuff and software (player) basically has no idea if a file I’m opening is a symlink or the original movie.mp4?

    Can I use sync software (like Dropbox, Gdrive or whatever) to sync symlinks? Can I use sync software to sync actual files, but only have symlinks in my sync folder?

    Is there a rule of thumb to predict how software behaves when dealing with symlinks?

    I just don’t grok symbolic links.

    • Cyclohexane@lemmy.mlOPM
      5 months ago

      A symlink works more closely to the first way you described it. The software opening a symlink has to actually follow it. It's possible for software not to follow the symlink (either intentionally or not).

      So your sync software has to actually be able to follow symlinks. I'm not familiar with how Gdrive and similar solutions work, but I know this is possible with something like rsync.
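      With rsync, for example, the behaviour is controlled by flags (a quick sketch; the paths are placeholders):

      rsync -a  ~/stuff/ /mnt/backup/   # -a keeps symlinks as symlinks
      rsync -aL ~/stuff/ /mnt/backup/   # -L (--copy-links) copies the files they point to instead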

      • teawrecks@sopuli.xyz
        5 months ago

        An application can know that a file represents a soft link, but it doesn't need to do anything differently to follow it. If the program just opens it, reads it, writes to it, etc., as though it were the original file, it will just work™ without it needing to do anything differently.

        It is possible for the software to not follow a soft symlink intentionally, yes (if they don’t follow it unintentionally, that might be a bug).

        As for hard links, I’m not as certain, but I think these need to be supported at the filesystem level (which is why they often have specific restrictions), and the application can’t tell the difference.
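        If you want to see the difference yourself, a quick sketch using the movie.mp4 example from the question:

        ln -s ~/Downloads/movie.mp4 ~/movie-link.mp4   # soft (symbolic) link
        ln    ~/Downloads/movie.mp4 ~/movie-hard.mp4   # hard link (same filesystem only)
        ls -li ~/Downloads/movie.mp4 ~/movie-*.mp4     # the hard link shares the inode number
        readlink ~/movie-link.mp4                      # prints the path stored in the symlink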

      • vort3@lemmy.ml
        5 months ago

        So I guess it’s something like pressing ctrl+c: most software doesn’t specifically handle this hotkey so in general it will interrupt a running process, but software can choose to handle it differently (like in vim ctrl+C does not interrupt it).

        Thanks.

        Fun fact: pressing X (close button) on a window does not make it that your app is closed, it just sends a signal that you wish to close it, your app can choose what to do with this signal.

    • 0xtero@beehaw.org
      5 months ago

      A symlink is a file that contains a shortcut (a text string that is automatically interpreted and followed by the operating system) referencing another file or directory in the system. It's more or less like a Windows shortcut.

      If a symlink is deleted, its target remains unaffected. If the target is deleted, the symlink still points to the now non-existent file/directory. Symlinks can point to files or directories regardless of volume/partition (hardlinks can't).

      Different programs treat symlinks differently. Majority of software just treats them transparently and acts like they’re operating on a “real” file or directory. Sometimes this has unexpected results when they try to determine what the previous or current directory is.

      There’s also software that needs to be “symlink aware” (like shells) and identify and manipulate them directly.

      You can upload a symlink to Dropbox/Gdrive etc. and it'll appear as a normal file (probably with a very small filesize), but it loses the ability to act like a shortcut. This is sometimes annoying if you use a cloud service for backups, as it can create filename conflicts, and you need to make sure it's preserved as a symlink when restored. Most backup software is "symlink aware".

    • bloodfart@lemmy.ml
      5 months ago

      its a pointer.

      E: Okay so someone downvoted “it’s a pointer”. Here goes. both hard links and symbolic links are pointers.

      The hard link is a pointer to a spot on the block device, whereas the symbolic link is a pointer to the location in the filesystems list of shit.

      That location in the filesystems list of shit is also a pointer.

      So like if you have /var/2girls1cup.mov, and you click it, the os looks in the file system and sees that /var/2girls1cup.mov means 0x123456EF and it looks there to start reading data.

      If you make a symlink to /var/2girls1cup.mov in /bin called “ls” then when you type “ls”, the os looks at the file in /bin/ls, sees that it points to /var/2girls1cup.mov, looks in the file system and sees that it’s at 0x123456EF and starts reading data there.

      If you made a hard link in /bin called ls it would be a pointer to the location on the block device, 0x123456EF. You’d type “ls” and the os would look in the file system for /bin/ls, see that /bin/ls means 0x123456EF and start reading data from there.

      Okay but who fucking cares? This is stupid!

      If you made /bin/ls into /var/2girls1cup.mov with a symlink then you could use normal tools to work with it, looking at where it points, it’s attributes etc and like delete just the link or fully follow (dereference) the link and delete all the links in the chain including the last one which is the filesystems pointer to 0x123456EF called /var/2girls1cup.mov in our example.

      If you made /bin/ls into a hardlink to 0x123456EF, then when you did stuff to it the os wouldn’t know it’s also called /var/2girls1cup.mov and when /bin/ls didn’t work as expected you’d have to diff the output of mediainfo on both files to see that it’s the same thing and then look where on the hard drive /var/2girls1cup.mov and /bin/ls point to and compare em to see oh, someone replaced my ls with a shock video using a hard link.

      When you delete the /bin/ls hardlink, the os deletes the entry in the file system pointing to 0x123456EF and you are able to put normal /bin/ls back again. Deleting the hard link wouldn’t actually remove the data that comprises that file off the drive because “deleting” a “file” is just removing the file systems record that there’s something there to be aware of.

      If instead of deleting the /bin/ls hardlink, you opened it up and replaced the video portion of its data with the music video to never gonna give you up, then when someone tried to open /var/2girls1cup.mov they’d instead see that music video.

      That is, if the file wasn't moved to another place on the block device when you changed it. Never gonna give you up has a much longer running time than 2girls1cup and without significant compression the os is gonna end up putting /bin/ls in a different place on the block device that can accommodate the longer data stream. If the os does that when you get done modifying your 2girls1cup /bin/ls into rickroll then /bin/ls will point to 0x654321EF or something and only you will experience astleys dulcet tones when you use ls, the old 0x123456EF location will still contain the data that /var/2girls1cup.mov is meant to point to and you will have played yourself.

      Okay with all that said: how does the os know what to do when one of its standard utilities encounters a symlink? They have a standard behavior! It’s usually to “follow” (dereference) the link. What the fuck good would a symbolic link be if it didn’t get treated normally? Sometimes though, like with “ls” or “rm” you might want to see more information or just delete the link. In those cases you gotta look at how the software you’re trying to use treats links.

      Or you can just make some directories and files with touch and try what you wanna do and see what happens, that’s what I do.

    • wolf@lemmy.zip
      5 months ago

      Symlinks are fully transparent for all software just opening the file etc.

      If the software really cares about this (like file managers) they can simply ask the Linux kernel for additional information, like what type of file it is.

    • Ramin Honary@lemmy.ml
      5 months ago

      Whenever I open the symlink, does the software (player) understand «oh this file seems like a symlink, I should go and open the original file», or it’s a filesystem level stuff and software (player) basically has no idea if a file I’m opening is a symlink or the original movie.mp4?

      Others have answered well already; I will just say that symlinks work at the filesystem level, but the operating system is specially programmed to work with them. When a program asks the operating system to open a file at a given path, the OS will automatically dereference the link, meaning it will detect a symlink and jump to the place where the symlink is pointing.

      A program may choose to inspect whether a file is a symlink or not. By default, when a program opens a file, it simply allows the operating system to dereference the file path for it.

      But some apps that work on directories and files together (like "find", "tar", "zip", or "git") do need to worry about symlinks, and will check whether a path is a symlink before deciding whether to dereference it. For example, you can ask the "find" command to list only symlinks without dereferencing them: find -type l

    • bizdelnick@lemmy.ml
      5 months ago

      Software opens a symlink the same way as a regular file. The kernel reads the path stored in the symlink and then opens the file at that path (or returns an error if it is unable to do so for some reason). But if a program needs to perform specific actions on symlinks, it is able to check the file type and resolve the symlink path itself.

      To determine how some specific software handles symlinks, read its documentation. It may have settings like "follow symlinks" or "don't follow symlinks".
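      From a shell script, for instance, the usual checks look like this ($path is a placeholder):

      [ -L "$path" ] && echo "it's a symlink"   # test the link itself, without following it
      readlink "$path"                          # print the target path stored in the link
      readlink -f "$path"                       # resolve every symlink down to the final target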

  • neidu2@feddit.nl
    5 months ago

    What’s the difference between /bin and /usr/bin and /usr/local/bin from an architectural point of view? And how does sbin relate to this?

    • bastion@feddit.nl
      5 months ago

      There's a standard, the Filesystem Hierarchy Standard (FHS). /usr was often a different partition.

      /bin - system binaries
      /sbin - system binaries that need superuser privileges
      /usr/bin - Normal binaries
      /usr/sbin - normal binaries that require superuser privileges
      /usr/local/bin - for executables that aren't 'packaged' - i.e., installed by you or some other program system-wide
      
  • cosmicrookie@lemmy.world
    5 months ago

    In the terminal, why can't I paste a command that I have copied to the clipboard with the regular Ctrl+V shortcut? I have to actually use the mouse and right-click, then select paste.

    (Using Mint cinnamon)

    • Captain Aggravated@sh.itjust.works
      5 months ago

      In Terminal land, Ctrl+C has meant Cancel longer than it’s meant copy. Shift + Insert does what you think Ctrl+V will do.

      Also, there’s a separate thing that exists in most window managers called the Primary buffer, which is a separate thing from the clipboard. Try this: Highlight some text in one window, then open a text editor and middle click in it. Ta da! Reminder: This has absolutely nothing to do with the clipboard, if you have Ctrl+X or Ctrl+C’d something, this won’t overwrite that.
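      On X11 you can peek at both buffers from a terminal with xclip, if you have it installed:

      xclip -selection primary -o     # what middle-click will paste
      xclip -selection clipboard -o   # what Ctrl+Shift+V will paste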

    • r0ertel@lemmy.world
      5 months ago

      Old timer here! As many others replying to you indicate, Ctrl+C means SIGINT (interrupt running program). Many have offered the Ctrl+Shift+C, but back in my day, we used Shift+Insert (paste) and Ctrl+Insert (copy). They still work today, but Linux has 2 clipboard buffers and Shift+Insert works against the primary.

      As an aside, on Wayland, you can use wl-paste and wl-copy in your commands, so git clone "$(wl-paste)" will clone whatever repo you copied to your clipboard. I use this one all the time

      • Trainguyrom@reddthat.com
        5 months ago

        so git clone "$(wl-paste)" will clone whatever repo you copied to your clipboard. I use this one all the time

        That’s a lot of confidence in not accidentally grabbing a leading/trailing space and grabbing unformatted text. I never trust that I’ve copied clean text and almost exclusively Ctrl+Shift+V to paste without formatting

    • ArcaneSlime@lemmy.dbzer0.com
      5 months ago

      Try ctrl+shift+v, iirc in the terminal ctrl+v is used as some other shortcut (and probably has been since before it was standard for “paste” I’d bet).

      Also linux uses two clipboards iirc, the ctrl+c/v and the right click+copy/paste are two distinct clipboards.

    • Cyclohexane@lemmy.mlOPM
      5 months ago

      The terminal world had Ctrl+C and Ctrl+(many other characters) already reserved for other things before they ever became standard for copy/paste. For this reason, Ctrl+Shift+(C for copy, V for paste) are used instead.

    • baseless_discourse@mander.xyz
      5 months ago

      In most terminals (GNOME Terminal, Black Box, Tilix, etc.) you can actually override this behavior by changing the keyboard shortcut. Black Box even has a simple toggle that will enable Ctrl+C/V copy-paste.

      GNOME Console is the only terminal I know of that doesn't allow you to change this.

    • Nyanix@lemmy.ca
      5 months ago

      While I don't have the answer as to why, it usually works if you just add a Shift, i.e. Shift+Ctrl+V. Many terminals also allow you to change the shortcuts for copy and paste, so you can adjust for comfort's sake.

    • u_die_for_elmer@lemm.eeB
      5 months ago

      Use shift+control+v to paste. Shift+control+c to copy in the terminal. It’s this way because control+c in the terminal is to break out of the currently running process.

    • Pesopes@lemm.ee
      5 months ago

      Ctrl+V is already a shortcut for something (I don’t even know what) but to paste just add shift so Ctrl+Shift+V.

      (Also a beginner btw)

    • Elsie@lemmy.ml
      5 months ago

      Ctrl+Shift+V is what you should do. Ctrl+V is used by shells for, I believe, inserting the next character verbatim without doing any sort of evaluation. I don't remember the specifics though, but yes, Ctrl+Shift+V to paste.

    • wewbull@feddit.uk
      5 months ago

      …because that would make Ctrl+C Cut/Copy and that would be really bad. It would kill whatever was running.

      So, it becomes Ctrl+Shift+C and paste got moved in the same way for consistency.

      • maxxxxpower@lemmy.ca
        5 months ago

        I use Ctrl+C to copy far more often than to break a process or something. I demand that Ctrl+Shift+C be reconfigured! 😀

  • DosDude👾@retrolemmy.com
    5 months ago

    Is there a way to remove having to enter my password for everything?

    Wake computer from screensaver? Password.
    Install something? Password.
    Updates (the biggest one - updates should in my opinion just work without a password, because being up to date is important for security reasons)? Password.

    I understand sudo needs a password, but all the other stuff I just want off. The frequency is ridiculous. I don't ever leave my house with my computer, and I don't want to enter a password for my wife every time she wants to use it.

    • lemmyreader@lemmy.ml
      5 months ago

      I understand sudo needs a password

      You can configure sudo to not need a password for certain commands. Unfortunately the syntax and documentation for that are not easily readable. doas, which can be installed and used alongside sudo, is easier.
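      As a rough sketch, a drop-in like this lets specific commands run without a password (the username and commands are examples; always edit it with visudo):

      # sudo visudo -f /etc/sudoers.d/updates
      yourname ALL=(root) NOPASSWD: /usr/bin/apt update, /usr/bin/apt upgrade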

      For software updates you can go for unattended-upgrades though if you turn off your computer when it is upgrading software you may have to fix the broken pieces.

      • DosDude👾@retrolemmy.com
        5 months ago

        I’ve tried unattended-upgrades once. And I couldn’t get it to work back then. It might be more user friendly now. Or it could just be me.

        • lemmyreader@lemmy.ml
          5 months ago

          It's not really user friendly, at least not how I know it. But it's useful for servers and for desktop computers that stay on for a long time. It would be a matter of enabling or disabling it with: sudo dpkg-reconfigure unattended-upgrades, granted that you have the unattended-upgrades package installed. In that case I'm not sure when the background updates will start, though according to the Debian wiki the time for this can be configured.

          But with Ubuntu, a desktop user should be able to configure software updates to be done automatically via a GUI. https://help.ubuntu.com/community/AutomaticSecurityUpdates#Using_GNOME_Update_Manager

    • Nibodhika@lemmy.world
      5 months ago

      I understand sudo needs a password,but all the other stuff I just want off.

      Sudo doesn’t need a password, in fact I have it configured not to on the computers that don’t leave the house. To do this open /etc/sudoers file (or some file inside /etc/sudoers.d/) and add a line like:

      nibodhika ALL=(ALL:ALL) NOPASSWD:ALL
      

      You probably already have a similar one, either for your user or for a certain group (usually wheel), just need to add the NOPASSWD part.

      As for the other parts you can configure the computer to not lock the screen (just turn it off) and for updates it depends on distro/DE but having passwordless sudo allows you to update via the terminal without password (although it should be possible to configure the GUI to work passwordless too)

    • shadowintheday2@lemmy.world
      5 months ago

      You can configure this behavior for CLI, and by proxy could run GUI programs that require elevation through the CLI:

      https://wiki.archlinux.org/title/Sudo#Using_visudo

      Defaults passwd_timeout=0 (avoids long-running processes/updates timing out while waiting for the sudo password)

      Defaults timestamp_type=global (this makes the typed password and its expiry valid for ALL terminals, so you don't need to type sudo's password for everything you open afterwards)

      Defaults timestamp_timeout=10 (change to any number of minutes you wish)

      The last one may be the difference between having to type the password every 5 minutes versus 1-2 times a day. Make sure you take the security implications into account.

    • teawrecks@sopuli.xyz
      5 months ago

      For wake from screensaver/sleep, this should be configurable. Your window manager is locking your session, so you probably just need to turn that option off.

      For installations and updates, I suspect you’re used to Windows-style UAC where it just asks you Yes or No for admin access in a modal overlay. As I understand it, this is easier said than done on linux due to an insistence on never running GUI applications as admin, which makes sense given how responsibilities are divided and the security and technical challenges involved. I will say, I agree 100% that this is a serious area that’s lacking for linux, but I also (think I) understand why no one has implemented something similar to UAC. I’ll try to give the shortest version I can:

      All programs (on both Windows and Linux) are run as a user. It's always possible for any program to have a bug in it that gives another program the opportunity to exploit the bug to hijack that program, and start executing arbitrary, malicious code as that user. For this reason, the philosophical stance on all OSes is: if it's gonna happen, let's not give them admin access to the whole machine if we can avoid it, so let's try to run as much as possible as an unprivileged user.

      On linux, the kernel-level processes and admin (root-level) account are fundamentally detached from running anything graphical. This means that it’s very hard to securely, and generically, pop up a window with just a Yes or No box to grant admin-level permissions. You can’t trust the window manager, it’s also unprivileged, but even if you could, it might be designed in a supremely insecure way, and allow just any app with a window to see and interact with any other app’s windows (Xorg), so it’s not safe to just pop up a simple Yes/No box, because then any other unprivileged application could just request root permissions, and then click Yes itself before you even see it. Polkit is possible because even if another app can press OK, you still need to enter the password (it’s not clear to me how you avoid other unprivileged apps from seeing the keystrokes typed into the polkit prompt).

      On windows, since the admin/kernel level stuff is so tightly tied to the specific GUI that a user will be using, it can overlay its own GUI on top of all the other windows, and securely pop in to just say, “hey, this app wants to run as admin, is that cool?” and no other app running in user mode even knows it’s happening, not even their own window manager which is also running unprivileged. The default setting of UAC is to just prompt Yes/No, but if you crank it to max security you get something like linux (prompt for the password every time), and if you crank it to lowest security you get something closer to what others are commenting (disable the prompt, run things as root, and cross your fingers that nothing sneaks in).

      I do think that this is a big deal when it comes to the adoption of linux over windows, so I would like to see someone come up with a kernel module or whatever is needed to make it happen. If someone who knows linux better than me can correct me where I’m wrong, I’d love to learn more, but that is how I understand it currently.
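      For what it's worth, polkit does expose this from the command line too: pkexec shows the graphical authentication dialog and then runs a single command as root (assuming a polkit authentication agent is running), e.g.:

      pkexec apt update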

  • Tovervlag@feddit.nl
    5 months ago

    Ctrl+Alt+F1, F2, etc. Why do these desktops/CLIs exist? What was their intended purpose, and what do people use them for today? Is it just legacy, or does it still serve a purpose?

    • d3Xt3r@lemmy.nzM
      5 months ago

      To add to what @bloodfart wrote, the history of TTYs (or virtual consoles) goes all the way back to the early days of computing and teletypewriter machines.

      In the old days, computers were gigantic, super expensive, and operated in batch mode. Input was often provided through punched cards or magnetic tape, and output was printed on paper. As interactive computing developed, the old teletypewriters (aka TTYs) were repurposed from telecommunication, to serve as interactive terminals for computers. These devices allowed operators to type commands and receive immediate feedback from the computer.

      With advancements in technology, physical teletypewriters were eventually replaced by electronic terminals - essentially keyboards and monitors connected to the mainframe. The term “TTY” persisted, however, now referring to these electronic terminals.

      When Unix came out in the 70s, it adopted the TTY concept to manage multiple interactive user sessions simultaneously. As personal computing evolved, particularly with the introduction of Linux, the concept of virtual consoles (VCs) was introduced. These were software implementations that mimicked the behavior of physical terminals, allowing multiple user sessions to be managed via a single physical console. This was particularly useful in multi-user and server environments.

      This is also where the term “terminal” or “console” originates from btw, because back in the day these were physical terminals/consoles, later they referred to the virtual consoles, and now they refer to a terminal app (technically called a “terminal emulator” - and now you know why they’re called an “emulator”).

      With the advent of graphical interfaces, there was no longer a need for a TTY to switch user sessions, since you could do that via the display manager (logon screen). However, TTYs are still useful for offering a reliable fallback when the graphical environment fails, and also as a means to quickly switch between multiple user sessions, or for general troubleshooting. So if your system hangs or crashes for whatever reason - don’t force a reset, instead try jumping into a different TTY. And if that fails, there’s REISUB.

      • Tovervlag@feddit.nl
        5 months ago

        thanks, I enjoyed reading that history. I usually use it when something hangs on the desktop as you said. :)

    • bloodfart@lemmy.ml
      5 months ago

      Each one is a virtual terminal and you can use them just like any other terminal. They exist because the easiest way to put some kind of an interactive display up is to just write text to a framebuffer, and that's exactly what your computer does when it boots and shows all that scrolling stuff. The different ones are just different framebuffers that the video card is asked to display when you push Ctrl+Alt+F<number>. You can add more or disable them altogether if you like.

      Years ago my daily driver was a relatively tricked out compaq laptop and I used a combination of the highest mode set I could get, tmux and a bunch of curses based utilities to stay out of x for as much of the time as I could.

      I mean, each vt had a slightly different colored background image, the text colors were configured, it was slick.

      I used to treat them like multiple desktops.

      With libcaca I was even able to watch movies on it without x.

      I still use them when x breaks, which did happen last year to my surprise. If your adapter supports a vesa mode that’s appropriate to your monitor then you can use one with very fresh looking fonts and have everything look clean. Set you a background image and you’re off to the races with ncurses programs.
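      On a systemd distro the text consoles are just getty services, so you can inspect or manage them like any other unit (the tty numbers here are only examples):

      systemctl status getty@tty2.service   # the login prompt behind Ctrl+Alt+F2
      sudo chvt 3                           # switch to virtual terminal 3 without the key combo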

    • ArcaneSlime@lemmy.dbzer0.com
      5 months ago

      If your system is borked sometimes you can boot into those and fix it. I’m not yet good enough to utilize that myself though, I’m still fairly new to linux too.

    • Elsie@lemmy.ml
      5 months ago

      They are TTYs, they’re like terminals your computer spawns at boot time that you can use. Their intended purpose is really whatever you need them for. I use them for if I somehow mess up my display configuration and I need to access a terminal, but I can’t launch my DE/WM.

    • Presi300@lemmy.world
      5 months ago

      Mostly for headless systems, servers and such. That and debugging, if your desktop breaks/quits working for some reason, you need some way to run multiple things at once…

  • HATEFISH@midwest.social
    5 months ago

    How can I run a sudo command automatically on startup? I need to run sudo alsactl restore to stop my microphone from playing in my own headphones on every reboot. Surely I can delegate that to the system somehow?

    • baseless_discourse@mander.xyz
      5 months ago

      If you run a systemd distro (which is most distros - Arch, Debian, Fedora, and most of their derivatives), you can create a service file, which will autostart as root on startup.

      The service file /etc/systemd/system/<your service>.service should look something like:

      [Unit]
      Description=some description
      
      [Service]
      ExecStart=alsactl restore
      
      [Install]
      WantedBy=multi-user.target
      

      then

      systemctl enable <your service>.service --now
      

      you can check its status via

      systemctl status <your service>.service
      

      you will need to change <your service> to your desired service name.

      For details, read: https://linuxhandbook.com/create-systemd-services/

      • HATEFISH@midwest.social
        5 months ago

        This one seemed perfect, but nothing lasts after the reboot for whatever reason. If I manually re-enable the service it's all good, so I suspect there's no issue with the below - I added After=multi-user.target after the first time it didn't hold after a reboot.

        
        [Unit]
        Description=Runs alsactl restore to fix microphone loop into headphones
        After=multi-user.target
        [Service]
        ExecStart=alsactl restore
        
        [Install]
        WantedBy=multi-user.target
        

        When I run a status check it shows it deactivates as soon as it runs

        Apr 11 20:32:24 XXXXX systemd[1]: Started Runs alsactl restore to fix microphone loop into headphones.
        Apr 11 20:32:24 XXXXX systemd[1]: alsactl-restore.service: Deactivated successfully.
        
          • HATEFISH@midwest.social
            5 months ago

            It seems to have no effect either way. Originally I attempted it without, then when it didn't hold after a reboot and some further reading, I added the After= line in an attempt to ensure the service isn't trying to initiate before it should be possible.

            I can manually enable the service with or without the After= line, with the same result of it actually working. It just doesn't hold after a reboot.

            • baseless_discourse@mander.xyz
              5 months ago

              That is interesting. BTW, I assume that command doesn't run forever, i.e. it terminates relatively soon? That could be why the service shows as deactivated, not because it didn't run. You can try adding ; echo "command terminated" at the end of ExecStart to see whether it terminates; you can also try to echo the exit code to debug.

              If the program you use has a verbose mode, you can also try turning it on to see if there is any error. EDIT: indeed, alsactl restore --debug

              There is also a possibility that this service runs before the device you need to restore is loaded, so it won't have any effect.

              On a related note, did you install the program via your package manager, and what distro are you running? Because sometimes SELinux will block a program from running. But the error message would say permission denied, instead of your message.

    • Cyclohexane@lemmy.mlOPM
      5 months ago

      Running something at start-up can be done multiple ways:

      • look into /etc/rc.d/rc.local
      • systemd (or whatever init system you use)
      • cron job
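      For the cron route, for example, the @reboot alias runs a command once at boot; put it in root's crontab if it needs root privileges (the alsactl path is typical but may differ on your distro):

      # sudo crontab -e, then add:
      @reboot /usr/sbin/alsactl restore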
    • Hiro8811@lemmy.world
      5 months ago

      Try pavucontrol; it has an option to lock settings, plus it's a neat app to call up when you need to customise settings. You could also add your user to the group that has access to the mic.

    • wolf@lemmy.zip
      5 months ago

      You got some good answers already; here is one more option: create a *.desktop file to run sudo alsactl restore, and copy the *.desktop file to ~/.config/autostart. (You might need to configure sudo to run alsactl without a password.)
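      A minimal sketch of such an autostart entry (the file name is arbitrary, and it assumes sudo is set up to run alsactl without a password as noted above):

      # ~/.config/autostart/alsactl-restore.desktop
      [Desktop Entry]
      Type=Application
      Name=Restore ALSA state
      Exec=sudo alsactl restore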

      IMHO the cleanest option is SystemD.

  • crazyCat@sh.itjust.works
    5 months ago

    I use Kali Linux for cybersecurity work and learning in a VM on my Windows computer. If I ever moved completely over to Linux, what should I do, can I use Kali as my complete desktop?

    • bloodfart@lemmy.ml
      5 months ago

      Kali is a very bad choice as a desktop or daily driver. It’s intended to be used as a toolkit for security work and so it doesn’t prioritize the needs of normal desktop use in either package management, defaults or patch updates.

      If you ever switched to Linux, pick a distribution you can live with and run kali in a vm like you’re doing now.

      Think of it this way: you wouldn't move into a shoot house, a mechanic's garage or an escape room, would you?

        • bloodfart@lemmy.ml
          5 months ago

          I used it as an installed desktop environment at a workbench in a non security context for a year. It was a pain in the butt in like a million ways.

          Even when I used the tools kali ships with regularly I either dual booted or ran it inside a vm.

          If you wanna understand why every time someone asks about using kali as a daily driver even on their own forums, a bunch of people pop up and say it’s a bad idea, give it a shot sometime.

          • crazyCat@sh.itjust.works
            5 months ago

            Ha no worry, I believe all you guys now and wouldn’t do it, and would just use a VM. Thank you for the insight.

        • d3Xt3r@lemmy.nzM
          5 months ago

          You can just install the tools you want on your host OS. But if it’s like hundreds of tools then yeah makes more sense to run it inside a VM, just so it’s all nice and separate from your daily-driver. And you may think it’s funny but the performance of Linux-on-Linux is actually pretty good, and there isn’t much of a RAM/CPU overhead either. And if you’re really strapped for RAM, you could use KSM (kernel samepage merging) and ballooning.

          Many Linux users use VMs (or containers) for separate workloads, and it’s a completely normal thing to do. For instance, on my homelab box, my host OS is my daily-driver, but all my lab stuff (Kubernetes, Ansible etc) all run under VMs. The performance is so good that you won’t even notice/care that it’s running on a VM. This is all thanks to the Linux/KVM/QEMU/libvirt stack, if it were something else like VMWare or VBox, it’d be a lot more clunkier and you can feel that it’s running on a VM - but that’s not the case with KVM.

    • foremanguy@lemmy.ml
      5 months ago

      No, never! Do not use Kali as your main OS. Choose a Debian, Fedora, RHEL (though it's not designed for this use case) or Arch system instead.

      • crazyCat@sh.itjust.works
        5 months ago

        Oh very cool thank you. In one way I meant more simply just if Kali is decent as a daily driver complete desktop, rather than just as a specialized toolkit.

    • Captain Aggravated@sh.itjust.works
      5 months ago

      Kali Linux is a pretty specific tool, it’s not suited for use as a daily driver desktop OS.

      It is my understanding that Kali is based on Debian with an xfce desktop, so if you want a similar experience (same GUI, same package manager) in a daily driver OS, you can start there.

    • Presi300@lemmy.world
      5 months ago

      Short answer: no.

      Long answer: Kali, as a desktop, is just half-broken Debian with a theme and a bunch of bloatware preinstalled… Even if your host is Linux, you should still run Kali in a VM.

    • neidu2@feddit.nl
      5 months ago

      Short answer: yes.

      Longer answer: Kali is not intended to be a normal desktop OS. It will work, but it might be a bit limiting.

      If you want a desktop Linux with a lot of the security stuff included, you might want to check out ParrotSec. I used that on my work laptop for a few years.

  • Kuvwert@lemm.ee
    5 months ago

    I installed Debian today. I’m terrified to do anything. Is there a single button backup/restore I can depend on when I ultimately fuck this up?

    • makingStuffForFun@lemmy.ml
      5 months ago

      I ran Linux in a VM and destroyed it about… 5 times. It allowed me to really get in and try everything. Once I ran a command that removed everything, and I remember watching icons disappear as the destruction unfolded in front of me. It was kind of fun.

      I have everything backed up and synced, so it's all fine. Just lots of reinstalling Thunderbird and Firefox, re-logging into Firefox Sync, etc.

      Once I stopped destroying everything I did a proper install and haven’t looked back.

      This will be my 7th year on Linux now. And I have to say, it feels good to be free.

    • wolf@lemmy.zip
      5 months ago

      Another perspective: your question implies you want to try things out with Debian. If that assumption is correct, I would highly recommend you just create a virtual machine with qemu/libvirt and learn within this environment/try things out there before doing stuff 'on the metal'.

      Of course backups are always a good idea, and once you've got your feet wet you might want to learn about 'Infrastructure as Code'. Have fun!

      • Kuvwert@lemm.ee
        link
        fedilink
        arrow-up
        2
        ·
        5 months ago

        That’s a fantastic suggestion, and I’ve already been doing exactly this :) But I’ve done it just enough to know that I’m really, really good at breaking stuff, and I don’t want to wait to fully transition from Windows. Hence the need for full system backups.

    • baseless_discourse@mander.xyz
      link
      fedilink
      arrow-up
      0
      ·
      5 months ago

      Install everything from the store and you should be fine. If a tutorial looks too complicated, it is probably not worth following. Set your search engine’s time filter to the past year and see if there are better tutorials.

      You might also want to consider atomic distros; they are much harder to mess up and much easier to restore.
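
      As an illustration of the “easier to restore” part, a sketch assuming a Fedora Atomic variant like Silverblue (which uses rpm-ostree; other atomic distros have their own equivalents):

      # show the current and previous system deployments
      rpm-ostree status

      # boot back into the previous deployment if an update broke something
      sudo rpm-ostree rollback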

      • Kuvwert@lemm.ee
        link
        fedilink
        arrow-up
        1
        ·
        5 months ago

        No I’m doing it to learn self hosting, I’m doing the hard stuff on purpose

        • baseless_discourse@mander.xyz
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          5 months ago

          Oh! In that case, may I suggest Yacht with Docker containers? https://yacht.sh/

          Everything on my home server is installed directly on the host; keeping it all up to date is pretty annoying, and permission control is completely non-existent.

          Since you want to do things the hard way, I believe this can also be a good opportunity to do things in the “better” way (at least IMO).
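
          If you go that route, a minimal sketch of getting Yacht up (based on my reading of the Yacht docs; double-check the image name, port, and volumes against https://yacht.sh/ before running):

          # create a volume for Yacht's config and start the web UI on port 8000
          docker volume create yacht
          docker run -d \
            --name yacht \
            -p 8000:8000 \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -v yacht:/config \
            selfhostedpro/yacht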

          • Kuvwert@lemm.ee
            link
            fedilink
            arrow-up
            2
            ·
            5 months ago

            Ah, now that does look promising. I had settled on Portainer, but this Yacht program looks very noob-friendly! I’ll install it today and check it out! Cheers!

    • Julian@lemm.ee
      link
      fedilink
      English
      arrow-up
      4
      ·
      5 months ago

      /bin, since that will include the basic programs (bash, ls, cp, etc.).

    • SmashFaster@kbin.social
      link
      fedilink
      arrow-up
      2
      ·
      5 months ago

      There is no direct equivalent, system32 is just a collection of libraries, exes, and confs.

      Some of what others have said is accurate, but to explain a bit further:

      Longer explanation:

      system32 is just some folder name the MS engineers came up with back in the day.

      Linux, on the other hand, has many distros and many different contributors, and it generally just encourages a … better … separation between types of files, imho.

      The Linux filesystem is well defined if you are inclined to research more about it.
      Understanding the core principles will make understanding virtually everything else about “linux” easier, imho. (There’s a quick shell tour after the list below.)

      https://tldp.org/LDP/intro-linux/html/sect_03_01.html

      tl;dr: “On a UNIX system, everything is a file; if something is not a file, it is a process.”

      The basics:

      • /bin - base level executables, ls, mv, things like that
      • /sbin - super-level-only (root) executables, parted, reboot, etc
      • /lib - Somewhat self-explanatory, holds libraries, lots of things put their libs here, including linux kernel modules, /lib/modules/*, similar to system32’s function of holding critical libraries
      • /etc - Configuration lives here, generally speaking, /etc/<application name> can point you in the right direction, typically requires super-user (root) to edit
      • /usr - “User installed” software, which can be a murky definition in today’s world, but lots of stuff ends up here for installed software, manuals, icon files, executables

      Bonus:

      • /opt - A special location, generally third-party, bundled-style software likes to use this, Java for instance, but historically some admins use it as the “company location”, meaning internally developed software would live there.
      • /srv - Largely subjective, but myself and others I know use it for partitions that are outside the primary disk, for instance we use /srv/db for database volumes, /srv/www for web-data volumes, /srv/Media for large-file storage, etc, etc

      For completeness:

      • /home - You’ll find your user directories here; personally, this is the directory I back up, since I don’t carry much more with me on most systems.
      • /var - “Variable data”, basically meaning any data that will likely grow over time, e.g. /var/log
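
      A quick, read-only way to see this layout on your own machine (nothing here modifies anything; the exact contents vary between distros):

      # the top-level layout described above
      ls /

      # where a command's binary actually lives
      command -v ls

      # configuration files live under /etc
      ls /etc | head

      # your own files live under /home
      echo "$HOME"
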
    • NaN@lemmy.sdf.org
      link
      fedilink
      English
      arrow-up
      2
      ·
      5 months ago

      Don’t think there is.

      system32 holds files that live in various places on Linux, because Windows often puts libraries next to binaries while Linux shares them.

      The bash binary in /bin depends on libraries in /lib, for example.
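
      You can see that dependency for yourself with ldd, which prints the shared libraries a binary links against:

      # list the shared libraries that bash pulls in
      ldd /bin/bash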

    • Captain Aggravated@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      1
      ·
      5 months ago

      As in, the directory in which much of the operating system’s executable binaries are contained?

      They’ll be spread between /bin and /sbin, which might be symlinks to /usr/bin and /usr/sbin. Bonus points: /boot.
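
      A quick check of whether your distro has merged those directories into /usr (many current ones have):

      # on merged-/usr systems these show up as symlinks pointing into /usr
      ls -ld /bin /sbin /lib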

    • ogeist@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      edit-2
      5 months ago

      For the memes:

      sudo rm -rf /*

      This deletes everything and is the most popular Linux meme.

      The closest to the “expected” functionality (nuking just the system binaries):

      sudo rm -rf /bin/*

      This deletes the main binaries. You can kinda recover from this, but I have never done it.

  • wanghis_khan@lemmy.ml
    link
    fedilink
    arrow-up
    4
    ·
    5 months ago

    NixOS. I don’t get what it really is or does? It’s a Linux distribution but with caveats or something.

    • exu@feditown.com
      link
      fedilink
      English
      arrow-up
      7
      ·
      5 months ago

      It’s a distribution completely centered around the Nix package manager. It basically lets you describe how your whole system should look using one declarative language. If you want an identical system, just copy that file to another machine and you’re set.
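
      The day-to-day workflow looks roughly like this (a sketch; /etc/nixos/configuration.nix is the default location, and the option in the comment is just one common example):

      # edit the single declarative config for the whole system
      sudoedit /etc/nixos/configuration.nix
      #   e.g. add:  environment.systemPackages = with pkgs; [ git firefox ];

      # build the new system from that file and switch to it
      sudo nixos-rebuild switch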

      • ReakDuck@lemmy.ml
        link
        fedilink
        arrow-up
        1
        ·
        5 months ago

        I remember that the kernel didn’t have performance flags set, making NixOS not a nice gaming platform.

        Is this true? Can I fix it myself easily?

        • pineapplelover@infosec.pub
          link
          fedilink
          English
          arrow-up
          3
          ·
          5 months ago

          Easily? I’ve heard it’s really time consuming to get it exactly how you like it but the same could be said about a lot of distros.

        • exu@feditown.com
          link
          fedilink
          English
          arrow-up
          1
          ·
          5 months ago

          Are you talking about that vm.max_memory something?
          Not sure how you’d change it in Nix exactly, but should be simple enough.

    • featured [he/him, comrade/them]@hexbear.net
      link
      fedilink
      English
      arrow-up
      5
      ·
      5 months ago

      Instead of installing packages through a package manager one at a time and configuring your system by digging into individual config files, NixOS has you write a single config file with all your settings and programs declared. This lets you configure your system more easily and get a completely reproducible system just by copying your Nix files to another NixOS machine and rebuilding.

      It’s also an immutable distribution, so the base system files are only modified when rebuilding the whole system from your config; at runtime they are read-only for security and stability.
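
      Rebuilding and undoing changes then looks roughly like this (a sketch; flake or channel details depend on your setup):

      # rebuild this machine from the declared config
      sudo nixos-rebuild switch

      # every rebuild creates a new "generation"; switch back if the new one misbehaves
      sudo nixos-rebuild switch --rollback

      # older generations also appear in the boot menu, so a broken config can
      # usually be fixed by booting the previous entry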