  • the NC is rarely available, or only for a few seconds, which screws up my automatic backup of photos. This is annoying. I think it is because there are two conflicting routes to the NC, one via the internal IPv4 and the other over the publicly available IPv6.

    Sounds unlikely tbh. A TCP connection is established to a specific target address, which stays the same for the duration of that connection, and there is pretty much no interaction between IPv4 and IPv6 in the first place. Have you run Wireshark? Is it the same problem from other clients in the network? Have you tried explicitly connecting to the IPv6 address and the IPv4 address to see if it’s a specific one that’s not working?
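    A quick way to run that last check from a client, assuming curl is available (the hostname is a placeholder for the real one):

```shell
# Force each address family separately against Nextcloud's status
# endpoint; nextcloud.example.com stands in for the real hostname.
curl -4 -v https://nextcloud.example.com/status.php
curl -6 -v https://nextcloud.example.com/status.php
```

    If one of the two hangs or resets while the other returns the status JSON, you’ve narrowed the problem down to one address family.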


  • But it actually doesn’t. Most public wifis or other residential networks don’t seem to give me external access to my Nextcloud; ironically, my mobile network via phone does.

    A lot of those networks are run by boomers who don’t care about IPv6 or don’t want to set it up because (insert excuse from IPv6 Bingo) or non-tech people whose router doesn’t turn it on automatically. So yeah, that is unfortunately something you have to expect and work around.

    Problem 1 seems to be best solved by renting the cheapest VPS I can find and then…build a permanent SSH tunnel to it? Use the WireGuard VPN of my router? Some other kind of tunnel to expose a public IPv4? IIRC, VPSes are billed by throughput; I’m not sure if I might run into problems here, but the only people that use it are my gf and me, and when not at home, mostly for the CalDAV stuff.

    You don’t even need a tunnel. Just a proxy on a VPS that listens on IPv4 and connects to the IPv6 upstream. Set the AAAA record to the real host and the A record to the VPS. This assumes you actually get a static prefix, which you should; some IPv4-brained ISPs hand out a rotating prefix instead, in which case it’s probably more annoying.

    I do this too, mine runs on a free Oracle Cloud ARM VPS.
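    Such a proxy can be as small as a single socat relay on the VPS; a sketch, with a placeholder IPv6 address (nginx’s stream module or HAProxy would work just as well):

```shell
# Listen on the VPS's IPv4 side and relay raw TCP to the IPv6 host.
# 2001:db8::1234 is a placeholder for the real server's address.
# Because this forwards TCP unmodified, TLS still terminates on the
# real host, so its certificate keeps working unchanged.
socat TCP4-LISTEN:443,fork,reuseaddr TCP6:[2001:db8::1234]:443
```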



  • The disks are the most uggo part. They’re a bunch of old disks of varying sizes with a RAID+LVM setup to make the most use of them while still being redundant.

    lsblk output of the whole thing
    saiko@vineta ~ % lsblk
    NAME                    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
    sda                       8:0    0 111.8G  0 disk  
    ├─sda1                    8:1    0   512M  0 part  /Volumes/Boot
    └─sda2                    8:2    0 111.3G  0 part  /nix/store
                                                       /
    sdb                       8:16   1 372.6G  0 disk  
    └─sdb1                    8:17   1 372.6G  0 part  
      └─md1                   9:1    0   1.5T  0 raid5 
        └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    sdc                       8:32   1 465.8G  0 disk  
    ├─sdc1                    8:33   1 372.6G  0 part  
    │ └─md1                   9:1    0   1.5T  0 raid5 
    │   └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    └─sdc2                    8:34   1  93.1G  0 part  
      └─md2                   9:2    0 279.3G  0 raid5 
        └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    sdd                       8:48   1   4.5T  0 disk  
    ├─sdd1                    8:49   1 372.6G  0 part  
    │ └─md1                   9:1    0   1.5T  0 raid5 
    │   └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    ├─sdd2                    8:50   1  93.1G  0 part  
    │ └─md2                   9:2    0 279.3G  0 raid5 
    │   └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    ├─sdd3                    8:51   1 465.8G  0 part  
    │ └─md3                   9:3    0 931.3G  0 raid5 
    │   └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    └─sdd4                    8:52   1   3.6T  0 part  
      └─md4                   9:4    0   3.6T  0 raid1 
        └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    sde                       8:64   1   7.3T  0 disk  
    ├─sde1                    8:65   1 372.6G  0 part  
    │ └─md1                   9:1    0   1.5T  0 raid5 
    │   └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    ├─sde2                    8:66   1  93.1G  0 part  
    │ └─md2                   9:2    0 279.3G  0 raid5 
    │   └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    ├─sde3                    8:67   1 465.8G  0 part  
    │ └─md3                   9:3    0 931.3G  0 raid5 
    │   └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    └─sde4                    8:68   1   3.6T  0 part  
      └─md4                   9:4    0   3.6T  0 raid1 
        └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    sdf                       8:80   1 931.5G  0 disk  
    ├─sdf1                    8:81   1 372.6G  0 part  
    │ └─md1                   9:1    0   1.5T  0 raid5 
    │   └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    ├─sdf2                    8:82   1  93.1G  0 part  
    │ └─md2                   9:2    0 279.3G  0 raid5 
    │   └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    └─sdf3                    8:83   1 465.8G  0 part  
      └─md3                   9:3    0 931.3G  0 raid5 
        └─storagevg-storage 254:0    0   6.3T  0 lvm   /Volumes/storage
    sr0                      11:0    1  1024M  0 rom
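    For reference, a layout like this is built by pairing the equal-sized partitions into md arrays and then pooling the arrays into one volume group. A rough sketch of the first array and the LVM layer (run as root; this is only an illustration of the commands involved, not the exact ones used here):

```shell
# md1: RAID5 over the five 372.6G partitions shown in the lsblk above.
mdadm --create /dev/md1 --level=5 --raid-devices=5 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
# md2-md4 would be created the same way from their member partitions;
# then all arrays become physical volumes in a single volume group:
pvcreate /dev/md1 /dev/md2 /dev/md3 /dev/md4
vgcreate storagevg /dev/md1 /dev/md2 /dev/md3 /dev/md4
# One logical volume spanning everything (~6.3T usable):
lvcreate -n storage -l 100%FREE storagevg
```

    The point of the per-size partitioning is that every md array only spans partitions of equal size, so no capacity is wasted even though the underlying disks range from ~370G to 7.3T.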