• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: November 5th, 2023

  • If your NAS has enough resources, the happy(ish) medium is to use your NAS as a hypervisor. The NAS can be on the bare hardware or in its own VM, and the containers can have their own VMs as needed.

    Then you don’t have to take down your NAS when you need to reboot your container’s VMs, and you get a little extra security separation between any externally facing services and any potentially sensitive data on the NAS.

    Lots of performance trade-offs there, but I tend to want to keep my NAS on more stable OS versions, and then the other workloads can be more bleeding edge/experimental as needed. It is a good mix if you have the resources, and having a hypervisor to test VMs is always useful.



  • If you are just using a self-signed server certificate, anyone can connect to your services. Many browsers/applications will fail to connect or give a warning, but it can be easily bypassed.

    Unless you are talking about mutual TLS authentication (aka mTLS or two-way SSL). With mutual TLS, in addition to the server key+cert you also have a client key+cert for your client. And you set up your web server/reverse proxy to only allow connections from clients that can prove they have that client key.

    So in the context of this thread, mTLS is a great way to protect your externally exposed services. Mutual TLS should be just as strong a protection as a VPN, and in fact many VPNs use mutual TLS to authenticate clients (i.e. if you have an OpenVPN file with certs in it instead of a pre-shared key). So they are doing the exact same thing. Why not skip all of the extra VPN steps and set up mTLS directly to your services?

    mTLS prevents any web requests from getting through before the client has authenticated, but it can be a little complicated to set up. In reality, basic auth at the reverse proxy with a sufficiently strong password is just as good, and is much easier to set up and use.

    Here are a couple of relevant links for nginx. Traefik and many other reverse proxies can do the same.

    How To Implement Two Way SSL With Nginx

    Apply Mutual TLS over kubernetes/nginx ingress controller
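
    To make the client side concrete, here is a minimal Python sketch, assuming a hypothetical service.example.com endpoint and cert/key file names issued by your own private CA. The client presents its own key+cert (the mutual part) in addition to verifying the server as usual:

    ```python
    import requests

    # Hypothetical file names: a client certificate/key signed by your private CA,
    # plus the CA certificate used to verify the server's certificate.
    CLIENT_CERT = "client.crt"
    CLIENT_KEY = "client.key"
    CA_BUNDLE = "my-private-ca.crt"

    # A reverse proxy configured for mutual TLS will reject any connection that
    # does not present a certificate signed by the CA it trusts.
    resp = requests.get(
        "https://service.example.com/",
        cert=(CLIENT_CERT, CLIENT_KEY),  # client side of mutual TLS
        verify=CA_BUNDLE,                # normal server certificate validation
    )
    print(resp.status_code)
    ```

    Browsers can do the same thing by importing the client certificate (typically as a .p12/.pfx bundle), so no special client software is needed.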


  • The biggest question is, are you looking for Dolby Vision support?

    There is no open source implementation for Dolby Vision or HDR10+ so if you want to use those formats you are limited to Android/Apple/Amazon streaming boxes.

    If you want to avoid the ads from those devices, then apart from sideloading APKs to replace the home screen or something, the only way to get Dolby Vision with Kodi/standard Linux is to buy a CoreELEC-supported streaming device and flash it with CoreELEC.

    List of supported devices here

    CoreELEC is Kodi based so it limits your player choice, but there are plugins for Plex/Jellyfin if you want to pull from those as back ends.

    Personally I find it a lot easier to just grab the latest gen Onn 4K Pro from Walmart for $50 and deal with the Google TV ads (I never leave my streaming app anyway). The only downside with the Onn is the lack of Dolby TrueHD/DTS-HD Master Audio output, but it handles AV1 and more Dolby Vision profiles than the Shield does at a much cheaper price. It also handles HDR10+, which the Shield doesn’t, but that format isn’t nearly as common and many of the big TV brands don’t support it anyways.


  • All of the “snooping” is self-contained. You run the network controller either locally on a PC, or on one of their dedicated pieces of hardware (Dream Machine/Cloud Key).

    All of the devices connect directly to your network controller, no cloud connections. You can have devices outside of your network connected to your network controller (layer 3 adoption), but that requires port forwarding so again it is a direct connection to you.

    You can enable cloud access to your network controller’s admin interface which appears to be some sort of reverse tunnel (no port forwarding needed), but it is not required. It does come in handy though.

    As far as what “snooping” there is, there is basic client tracking (IPs/MACs/hostnames) to show what is connected to your network. The firewall can track basics like bandwidth/throughput, and you can enable deep packet inspection, which classifies internet destinations (streaming/Amazon/Netflix sorts of categories). I don’t think that classification reaches out to the internet, but that probably needs to be confirmed.

    All of their devices have an SSH service which you can login to and you have pretty wide access to look around the system. Who knows what the binaries are doing though.

    I know some of their WISP (AirMAX) hardware for long distance links has automatic crash reporting built in which is opt out. There is a pop up to let you know when you first login. No mention of that on the normal Unifi hardware, but they might have it running in the background.

    I really like their APs, and having your entire network in the network controller is really nice for visibility, but my preference is to build my own firewall that I have more control over and then use Unifi APs for wireless. If I were concerned about the APs giving out data, I know I could cut that off at the firewall easily.

    A lot of the Unifi APs can have OpenWRT flashed on them, but the latest Wi-Fi 7 APs might be too locked down.


  • I am not a SAN admin but work closely with them. So take this with a grain of salt.

    Best practice is always going to be to split things into as many failure domains as possible. The main argument is: how would you test upgrades to the switch firmware without potentially affecting production?

    But my personal experience says that assuming you have a typical A/B fabric that is probably enough to handle those sorts of problems, especially if you have director class switches where you have another supervisor to fail back to.

    I’ve personally seen shared dev/prod switches for reasonably large companies (several switches with ~150 ports lit on each switch), and there were never any issues.

    If you want to keep a little separation between dev and prod, keep those on different VSANs, which will force you to keep the zones separated.

    Depending on how strict change management is for your org, keep in mind that tangling dev+prod together might make your life worse in other ways. E.g. you can probably do switch firmware updates/zoning changes/troubleshooting in dev during work hours, but as soon as you connect those environments together you may have to do all of that on nights and weekends.


  • Before I had the helmet units I used to ride with in-ear wired headphones. Those worked pretty well, but putting the helmet on often pulled at least one side slightly out of my ear, making a worse seal. In-ear also means you get some hearing protection, which is always good to have, especially at highway speeds or if you have a loud exhaust. I will say the audio quality/clarity/volume going this route is unbeatable. If all you want is music, an in-ear solution will be much better than speakers in your helmet.

    You can get relatively high quality wired headphones for quite cheap that are very low profile.

    I would think any wireless earbuds like AirPods would probably be too big to stay in when you put your helmet on, but you might be able to make it work.

    One issue with headphones is that it is probably not legal in most places, so be cautious about that. You can easily rip them out before the police would notice anyway, but it is still a risk.

    If you go the Sena/Cardo route you should consider hearing protection as well. I usually use the foam inserts. It sounds counterintuitive but having the earplugs in actually makes the speakers easier to hear. They tend to filter the wind noise but the more direct sound from the speakers can get through.


  • I’ve run Cardo (Packtalk Slim, Packtalk Edge), Sena (30K), and some $30 Amazon units, and I very much prefer the Cardo units.

    The first thing to do is ask any friends you might ride with what they use. It’s been a while since I tried, but getting Sena to pair with anything other than Sena used to be quite a pain.

    The Sena units’ hardware seems a little better, and the phone software may be a little more polished, but the actual comms are terrible. Filtering of background noise is nowhere near as good as on Cardo, and general audio clarity is much better on Cardo.

    Probably fixed by now, but we had all sorts of problems with mesh pairing, and pairing in general, on the Senas.

    Cardos work well enough, still a bit of a pain to get things paired sometimes, but you can do the whole process from the phone app so you don’t need to know button combos.

    Cardo’s Bluetooth bridge capability is awesome if you have a group of Cardo users and the occasional Amazon user that you want to bring into the mesh. The mesh connects all of your Cardo units together and each Cardo unit can use its Bluetooth bridge to bring in one normal Bluetooth headset. I don’t recall if we could get this working to bring Sena units into the group but I have seen release notes saying they improved compatibility so maybe it works now.

    I know a couple of people with Shoei helmets with the built in Sena hookups so they get forced into Sena. Seems like the hardware for the integrated units lags behind (took them a few extra years to get a mesh option), and you are certainly locking yourself in. I would personally prefer to not have myself locked in.

    If you plan to ride with multiple people, getting everyone on the same system, especially with mesh units, is a must. We just have one big mesh group with everyone we know, and as soon as you meet up everything auto-connects and is good to go.

    The $30 Amazon units actually work well too. Probably not remotely waterproof, and they can usually only connect to one rider at a time, but for the price you might want to start there.

    Video is old now but maybe still relevant: https://youtu.be/-AMoXbXHALc

    He recommends the Packtalk Slim because it is so low profile, but the new Packtalk Edge is removable, has a lower profile than the old Bolds, uses USB-C, fixes most of his problems with the removable ones and lets you charge without tethering directly to your helmet.

    The Slims are also a little less waterproof because they are more complicated, with more wires in and out. The Edge, being self-contained, I would expect to have much better water resistance (I haven’t had to take my Edges apart yet so I can’t say that for sure).

    I’ve had a group of us with Slims in pouring rain for three hours and they did survive; one or two had some issues with buttons afterwards, but they still functioned. Not sure I can even blame it on the units, as we had those universal magnetic charger buttons plugged in, which kept the little rain cover open at the back.

    Also had one Slim unit where the microphone cable went out; not perfect, but the warranty is usually pretty good. Again a win for the Edge/Bold removable units, because those cables are separate from the unit itself, so you could buy a helmet kit if it fails out of warranty.


  • Like most have said, it is best to stay away from ZFS deduplication. Especially if your data set is media, the chances of an entire ZFS block being the same as any other are small unless you somehow have multiple copies of the same content.

    Imagine two MP3s with the exact same music content but with slightly different artist metadata. Make the file a single bit longer or shorter at the beginning and, even if the file spans multiple blocks, ZFS won’t be able to deduplicate a single byte. A single bit offsetting the rest of the file just a little is enough to throw off the block checksums across every block in the file.
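
    As a toy illustration of that fixed-block problem (the block size and data below are made up; real ZFS records default to 128K), you can checksum fixed-size blocks of two byte strings that differ only by a one-byte offset:

    ```python
    import hashlib

    BLOCK_SIZE = 16  # toy block size; real ZFS records are much larger

    def block_hashes(data: bytes) -> set[str]:
        """Checksum fixed-size blocks, the way block-level dedup sees a file."""
        return {
            hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)
        }

    payload = bytes(range(256)) * 4   # stand-in for identical audio data
    file_a = payload                  # one metadata layout
    file_b = b"!" + payload           # same payload, shifted by a single byte

    # The one-byte offset moves every block boundary, so no checksums line up.
    print(len(block_hashes(file_a) & block_hashes(file_b)))  # prints 0
    ```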

    To contrast with ZFS, enterprise backup/NAS appliances with deduplication usually do a lot more than block level checks. They usually check for data with sliding window sizes/offsets to find more duplicate data.

    There are still some use cases where ZFS dedup can help, like if you were doing multiple full backups of VMs. A VM image has a fixed size so the offset issue above doesn’t apply, but beware that enabling deduplication for even a single ZFS filesystem affects the entire pool, even ZFS filesystems that have deduplication disabled. The deduplication table is global for the pool, and once you have turned it on you really can’t get rid of it. If you get into a situation where you don’t have enough memory to keep the deduplication table in memory, ZFS will grind to a halt, and the only way to completely remove deduplication is to copy all of your data to a new ZFS pool.

    If you think this feature would still be useful for you, you might want to wait for the 2.3 release (which isn’t too far off) and its new fast dedup feature, which fixes or at least mitigates a lot of the major issues with ZFS dedup.

    More info on the fast dedup feature here https://github.com/openzfs/zfs/discussions/15896


  • Third-party solutions can at least partially fix this. I have this site, https://spotifyshuffler.com/, create a shuffled copy of my playlists occasionally. Then you just play the pre-shuffled playlist with shuffle disabled.

    In my case I have a large (several thousand track) playlist, and I turn on Spotify’s shuffle just to pick the first track at a somewhat random spot in the large list, then shut their shuffle off to continue the pre-shuffled list without their manipulation. Whenever I add content to the playlist I have it reshuffled.


  • I assume you are powering the dock? Many docks require external power before they will pass video.

    Does the screen on the deck shut off or stay active?

    If the screen stays active that means that it isn’t detecting an HDMI signal through the dock at all.

    If the screen shuts off but you get no video through the receiver, try hitting the power button once to shut the deck off, wait a few seconds, then turn it back on (while plugged in). Even the official dock has issues getting the deck to switch to the external output, but putting the deck to sleep and waking it back up gets it sorted out.

    If that still doesn’t do it, plug in directly to your TV to narrow down the problem (this removes the receiver as a variable). Next try a different HDMI cable, and as a last resort try a different dock. If you know someone else with their own deck, you can try theirs to rule out a hardware failure on your deck.



  • Contrary to a lot of posts that I have seen, I would say ZFS isn’t pointless with a single drive. Even if you can’t repair corruption with a single drive, knowing something is corrupt in the first place is even more important (you have backups to restore it from, right?).

    And ZFS still has a lot of features that are useful regardless, like snapshots, compression, reflinks, and send/receive, and COW means no concerns about data loss during a crash.

    BTRFS can do all of this too, and I believe it is better on low-memory systems, but since you have ZFS on your NAS you unlock a lot of possibilities by keeping them the same.

    E.g. if you keep your T110 II running with ZFS, you can use tools like syncoid to periodically push snapshots from the Optiplex to your T110.

    That way your Optiplex can be a workhorse, and your NAS can keep the backup+periodic snapshots of the important data.

    I don’t have any experience with TrueNAS in particular, but it looks like syncoid works with it. You might need to make sure that pool versions/feature flags are the same for send/receive to work.

    Alternatively, keep that data on an NFS mount. The SSD in the Optiplex would just be for the base OS and wouldn’t have any data that can’t be thrown away. The disadvantage here is that your Optiplex now relies on a lot more to keep running (networking + NAS must be online all the time).

    If you need HA for the VMs you likely need distributed storage for the VMs to run on. No point in building an HA VM solution if it just moves the single point of failure to your NAS.

    Personally I like Harvester, but the minimum requirements are probably beyond what your hardware can handle.

    Since you are already on TrueNAS Scale, have you looked at using TrueNAS Scale on the Optiplex with replication tasks for backups?





  • The closest to this I’ve worked was a convenience store that included a deli.

    In that context, the way I would have seen it was that he probably would have come in and bought them anyway, so the only difference to me would be sticking them in three bags vs. one. No different than anyone else asking for their pizza cut a different way or whatever other minor out-of-the-ordinary changes customers wanted.

    If we were swamped with orders then yeah, I wouldn’t be happy about it, but you get over it and move on; that is part of working retail.



  • If you are just looking to repurpose an old device for around-the-house use and it won’t ever be leaving your home network, then the simplest method is to set a static IP address on the device and leave the default gateway empty. That will prevent it from reaching anything other than the local subnet.

    If you have multiple subnets that the device needs to access you will need a proper firewall. Make sure that the device has a DHCP reservation or a static IP and then block outgoing traffic to the WAN from that IP while still allowing traffic to your local subnets.

    If it is a phone, who knows what that modem might be doing if there isn’t a hardware switch for it. You can’t expect much privacy when that modem is active. But like the other poster mentioned, a private DNS server that only has records for your local services would at least prevent apps from reaching out, as long as they aren’t smart enough to fall back to an IP address if DNS fails.

    A VPN for your phone with firewall rules on your router that prevent your VPN clients from reaching the WAN would hopefully prevent any sort of fallback like that.


  • If you are accessing your files through dolphin on your Linux device this change has no effect on you. In that case Synology is just sharing files and it doesn’t know or care what kind of files they are.

    This change is mostly for people who were using the Synology videos app to stream videos. I assume Plex is much more common on Synology and I don’t believe anything changed with Plex’s h265 support.

    If you were using the built in Synology videos app and have objections to Plex give Jellyfin a try. It should handle h265 and doesn’t require a purchase like Plex does to unlock features like mobile apps.

    Linux isn’t dropping any codecs and should be able to handle almost any media you throw at it. Codec support depends on what app you are using, and most Linux apps use ffmpeg to do that decoding. As far as I know Debian hasn’t dropped support for h265, but even if they did you could always compile your own ffmpeg libraries with it re-enabled.

    How can I most easily search my NAS for files needing the removed codecs?

    The mediainfo command is one of the easiest ways to do this on the command line. It can tell you what video/audio codecs are used in a file.
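
    As a rough sketch of automating that check (assuming a mediainfo build recent enough to support JSON output; the /volume1/media path is just a placeholder), something like this lists every file whose video stream is HEVC/H.265:

    ```python
    import json
    import subprocess
    from pathlib import Path

    MEDIA_ROOT = Path("/volume1/media")            # placeholder media path
    EXTENSIONS = {".mkv", ".mp4", ".m4v", ".mov"}  # extensions worth scanning

    for path in MEDIA_ROOT.rglob("*"):
        if path.suffix.lower() not in EXTENSIONS:
            continue
        # Ask mediainfo for machine-readable JSON instead of its default text report.
        out = subprocess.run(
            ["mediainfo", "--Output=JSON", str(path)],
            capture_output=True, text=True,
        ).stdout
        try:
            tracks = json.loads(out)["media"]["track"]
        except (json.JSONDecodeError, KeyError, TypeError):
            continue
        if any(t.get("@type") == "Video" and t.get("Format") == "HEVC" for t in tracks):
            print(path)
    ```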

    With Linux and Synology DSM both dropping codecs, I am considering just taking the storage hit to convert to h.264 or another format. What would you recommend?

    To answer this you need to know the least common denominator of supported codecs across everything you want to play back on. If you are only worried about playing this back on your Linux machine with your 1080s, then you fully support h265 already and should not convert anything. Any conversion between codecs is lossy, so it is best to leave them as they are or else you will lose quality.

    If you have other hardware that can’t support h265, h264 is probably the next best. Almost any hardware in the last 15 years should easily handle h264.

    When it comes to thumbnails for a remote filesystem like this, are they generated and stored on my PC, or will the PC save them to the folder on the NAS where other programs could use them?

    Yes they are generated locally, and Dolphin stores them in ~/.cache/thumbnails on your local system.