I’ll introduce you to the concept of WAF, Wife Acceptance Factor.
Basically, all smart IoT devices MUST default back to dumb behaviour in an expected manner. All MITM systems must either fail gracefully, fall back simply, or be robust enough to not fall over.
I’ve been trying my very best to get Plex to a high WAF, but it fucks up constantly.
I get this constantly:
The WAF on my household tech is pretty high. That includes Plex.
I have dual/redundant DNS in house, and my Plex has nearly 100% uptime, running 24/7/365 on old server hardware. Our living space is far enough away from the servers that the noise isn’t really a problem, and I can break most of what I have installed/set up and the internet keeps working, thanks to the independent, redundant DNS. All of my homelab domains are just a stub zone in my main DNS, so everything keeps working if something dies.
I use Jellyfin instead of Plex, and it runs on my old PC, which sits next to my regular PC. I’d like to move it, but it’s a bit too big to fit anywhere conveniently.
The WAF is teetering on a knife’s edge. I’ve been spending so much time getting it set up and adding content that I haven’t organized the library much. I need to reorganize things to put her workout videos in a separate spot, because right now they’re very hard to find. If I can get everything working well, she’ll probably let me finally cancel our Netflix and Disney+ subscriptions, provided I top up our content a bit more.
I have yet to mess with DNS. I’d really like to give our Jellyfin a DNS entry, but I’d also really like it to be routed internally when we’re on our own network so we don’t take a big perf hit. Doing that means running custom DNS on our network, so I’ve set up a second WiFi network to play around with. Hopefully in the next month or so we’ll have a nice domain, like “media.mydomain.com” or something, which gets routed internally on WiFi while TLS still works properly.
For full WAF compatibility you need a front end where she can add content herself, like Ombi or Overseerr.
So far, Samba is working, but I’ll check those out too.
These kinds of split DNS routing issues are something I’ve struggled with for a while. From my experience, you have basically two options, and depending on your specific situation only one might be viable.
The first option entirely relies on what your router can do, so it may or may not be available to you. Bluntly, if you use the ISP-provided router, you’re probably SOL; if not, you have a chance. Higher-end (and/or enterprise-class) routers and firewalls generally have sufficient features, with a few exceptions.

The feature you need is called hairpin NAT (sometimes NAT loopback or NAT reflection), though it will pretty much never be called that in your NAT settings, so you’ll need to Google your router model plus “hairpin NAT” to figure out whether it can be done and how. To describe what it is, let’s start with basic port forwarding and adapt from there. Most people know how port forwarding works: a connection arriving on the external (WAN) interface on a given port is forwarded to an internal IP and port. Hairpin NAT is the same thing from the inside: if a connection from the LAN is destined for the WAN interface’s IP address, it gets forwarded to an internal (LAN) IP and port. This works alongside regular port forwarding, not instead of it.
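To make that concrete, on a Linux-based router hairpin NAT comes down to roughly two rules. This is only a sketch: the WAN IP (203.0.113.10), the media server (192.168.1.50), and port 8096 are all made-up placeholders.

```
# DNAT: anything addressed to the WAN IP on 8096 goes to the internal server,
# whether it arrives from the internet or from the LAN
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 8096 \
  -j DNAT --to-destination 192.168.1.50:8096

# Masquerade the hairpinned LAN traffic so replies flow back through the
# router instead of going host-to-host directly (and getting dropped)
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.50 \
  -p tcp --dport 8096 -j MASQUERADE
```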
If your router/firewall doesn’t support hairpin NAT, you’re going to be limited to plan B: DNS.
With bifurcated (split-horizon) DNS, you’re going to have some frustration if anything changes, so as with all of your port forwards, you’ll want to lock down the IP of your target system. Port forwards are bothersome to update, but not unreasonable. DNS is really not fun: in addition to updating port forwards for external connections, you now need to update DNS too. Not great.
So how do you do this? It’s actually not super hard. As far as I know, you can use Pi-hole (which does not require a Raspberry Pi, by the way) or any other DNS server that tickles your fancy. I use BIND, but the actual DNS software isn’t super important; it just needs to support forwarders and custom entries in the config, which I believe both do. Pi-hole and similar options can also do DNS-based ad blocking. I’m not a fan of that, but do what you want.
The next step is to set up DNS internally. Get your DNS software of choice, and either buy a Raspberry Pi to run it (BIND runs fine on a Pi too), run it in a virtual machine, or stand up an old PC for it. Install whatever OS you feel comfortable running the software on. I always use Linux, but as long as your chosen software runs on the OS, it doesn’t matter much. Give the system a static IP and install everything.
Once it’s set up, if you own a domain, you can create an A record for your service (in your case Jellyfin), say “media.domain.com”, pointing at that service’s internal address. In your public (global) DNS, point media.domain.com at your WAN IP instead.
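As a sketch of the internal side in BIND, it’s one zone declaration plus a zone file. Every name and address below is a made-up example, so substitute your own:

```
// /etc/bind/named.conf.local — serve your domain authoritatively inside the LAN
zone "domain.com" {
    type master;
    file "/etc/bind/db.domain.com";
};
```

```
; /etc/bind/db.domain.com
$TTL 300
@       IN  SOA ns1.domain.com. admin.domain.com. (
            2024010101  ; serial — bump on every change
            3600        ; refresh
            600         ; retry
            86400       ; expire
            300 )       ; negative-cache TTL
        IN  NS  ns1.domain.com.
ns1     IN  A   192.168.1.53    ; the box running BIND
media   IN  A   192.168.1.50    ; Jellyfin, resolved to its LAN address
```

One design note: if your internal BIND is authoritative for the whole domain, it won’t forward lookups for that domain to the outside, so you’ll need to duplicate any public records (mail, web, etc.) that you still want resolvable from inside.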
Personally, I run BIND on a Raspberry Pi. To make management easier, I also installed Webmin, which lets me manage the BIND configuration through a web interface.
For bonus points, do it all over again and build a second one.
And don’t forget to set up forwarders on your internal DNS servers so they can resolve internet addresses. Pro tip: use the DNS Benchmark tool from GRC.com to find the fastest DNS servers for you.
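In BIND, forwarders are a few lines in the options block. The resolver IPs here are just well-known public ones; swap in whatever the benchmark favours for you:

```
// /etc/bind/named.conf.options
options {
    directory "/var/cache/bind";
    recursion yes;
    forwarders {
        1.1.1.1;    // Cloudflare
        9.9.9.9;    // Quad9
    };
    forward only;   // don't fall back to full recursion if the forwarders fail
};
```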
If you want to go crazy, like me, build a third DNS server for all your internal lab stuff on a different domain, like “homelab.local” (it can be anything, though note that .local technically belongs to mDNS, so a different private suffix avoids occasional OS weirdness), and create a stub zone for it on your primary DNS that points to the lab DNS. That way, any “homelab.local” names, like media.homelab.local, are set up once on the dedicated homelab DNS server, and the other two simply point to it via the stub zone.
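The stub zone on the primaries is again just a few lines of config (the lab server’s IP is a placeholder):

```
// on each primary DNS server
zone "homelab.local" {
    type stub;
    masters { 192.168.1.54; };   // the dedicated homelab DNS server
    file "db.stub.homelab.local";
};
```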
I always recommend finding fast DNS servers to use internally, and I always recommend that if you’re using internal DNS, you have at least two of them.
Last, but not least, after all of that effort, confirm that your fancy new DNS actually works (good luck with any troubleshooting you might need to do), and update DHCP to point clients at the internal systems for DNS resolution.
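A quick way to confirm, using the placeholder addresses from above:

```
# ask the new internal server directly; expect the LAN address back
dig @192.168.1.53 media.domain.com +short
# 192.168.1.50
```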
Easy, simple, barely an inconvenience, right?
I use a Mikrotik router, so it probably does. I’ll have to check it out. I assume it can do SNI-based routing just like HAProxy, but if not, I’ll have to move HAProxy to my LAN and just do a TCP tunnel on my VPS.
But yeah, doing this and internal DNS should make for a more robust system, thanks for the breakdown.
I work with this stuff professionally. I personally enjoy Mikrotik. Not sure how to hairpin NAT on it off the top of my head, though I’m sure it can be done.
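If memory serves from the Mikrotik docs, it comes down to two NAT rules roughly like these. Untested, and the addresses/port are placeholders, so treat it as a starting point:

```
/ip firewall nat
# forward WAN-IP:8096 to the internal server (applies from WAN and LAN alike)
add chain=dstnat dst-address=203.0.113.10 protocol=tcp dst-port=8096 action=dst-nat to-addresses=192.168.1.50 to-ports=8096
# masquerade hairpinned LAN-to-LAN traffic so replies return via the router
add chain=srcnat src-address=192.168.1.0/24 dst-address=192.168.1.50 protocol=tcp dst-port=8096 action=masquerade
```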
I usually use a business firewall as my gateway. Nothing wrong with mikrotik at all, it’s all personal preference. I think this is the first time I’ve heard of someone using a tik in the wild who isn’t running an ISP.
Yup, I was looking for an inexpensive, enterprise grade router, and 5 port Mikrotik was just the right size and price. I like playing with networking stuff.
The next project is getting a WiFi network with a VPN configured at the router level, as well as a WiFi network with no access to the rest of the network. I use a Ubiquiti AP, so it should be feasible.
I used to manage the network at my last job, a startup, but I’m not in IT, I’m a software engineer who gets into a lot of adjacent stuff.
That sounds familiar. My official job is more system administration, but networking is my one true love… At least in terms of work and interests.
Ubiquiti makes pretty good WiFi gear. If I’m not mistaken you’ll need a controller running 24/7 to ensure that roaming and such works, but IMO that should always be the case anyway. When in doubt, there’s always the Cloud Key.
Ubiquiti wireless supports VLANs, so you should be good there if it’s just going to be another SSID on your WiFi.

I help run a Ubiquiti network at home; we have two main LANs. Mine is mostly Cisco gear, with a SonicWall as the gateway. The other network here is entirely Ubiquiti: a UDM Pro, UniFi PoE switches, and UniFi access points (about 4 right now, mostly for speed/density, though that network only has about 30 devices on it at any given time). The UDM acts as the network controller/manager.

I don’t love it, because the routing is not where I’d like it to be in terms of features and capability. A prime example: I’ve been pushing into L3 switching, and for that network we got an Enterprise 48 PoE, which can do 2.5G with PoE+ on all ports and has a slew of additional features, including L3 switching. On my side I have a Cisco Catalyst 4948 connected to the Enterprise 48 over a 10G link, and I wanted to use that link for device-to-device routing. On the Cisco, everything worked like clockwork. On the Ubiquiti side, you can define routes, but they’re only added to the controller, which only pushes them to the gateway. So traffic from my net to the Ubiquiti net goes station to switch to switch to station, but return traffic from the Ubiquiti side goes station to switch to the UDM, then to my switch, then to the target station. You can manually add the routes to the Enterprise 48 via CLI/SSH, but as soon as the unit restarts, the config is replaced with the current config on the controller (which doesn’t include the routing information).
I did it this way because my homelab, which everyone uses in some way or another, hangs off my 4948 on a VLAN with L3 switching. I want to avoid the overhead of the extra hops and the bandwidth limits of going gateway-to-gateway, and the Enterprise 48 just won’t do it. In the control panel there’s no way to set which routes should be installed on which devices, so you’re kind of up a creek without a paddle.

I like Ubiquiti, but for anything more advanced than all VLANs being handled directly by the gateway, I wouldn’t recommend it. Since most home users only ever need VLANs routed at the gateway, it’s still my go-to recommendation for them: it’s inexpensive (relatively speaking), fairly easy to manage, and quite good at what most users need.

If I were to do it again, I’d skip the Enterprise 48. It was much more expensive than the “Pro” line, which would have been adequate (no 2.5G on the Pro), or even the basic 48 PoE, which only does layer 2 (VLANs). I specifically bought the Enterprise so it could do this, and simply put, it doesn’t. I can force it to work, but I have to re-add the routes every time the unit restarts. It’s a huge pain.
If you’re only going to use Ubiquiti for WiFi, though, have at it. It’s quite good. A bit basic IMO, but I’m used to Cisco Aironet, which is above most people’s heads with the options you can set. Ubiquiti is a good balance.
I kinda feel like old server hardware is key here. I have pretty much my whole lab running on an old R730 that I put a bunch of ECC RAM, disks, and a transcode GPU into, and it’s been essentially flawless for like 2 years. Plus it has IPMI, which I don’t think I could live without now. It replaced a setup that would always give me issues, consisting of a bunch of OptiPlexes and white boxes. I still hack on Pis because it’s fun, but all the core stuff is surplus enterprise.
I recently upgraded my lab; it used to be an R710 and a pair of nodes from a C6100. Because that stuff was so old, I managed to cram all the VMs I was running onto a single FC630 node in a shiny, new (to me) Dell FX2s.
I really want to get a transcoding GPU, but passing one through to a VM has historically been infeasible, and even now it’s complicated, at least for Nvidia GPUs. I’ve been looking at the Intel discrete GPU lines for the task recently. I’d sure like to grab a Flex 140, but looking at the prices right now, ha, that’s not happening anytime soon. With the FX2s I can only install single-slot, half-height cards, so options are limited. Front runners right now are the Nvidia P4 and T4, and the Intel Arc A380 with a modded cooler so it’s single slot. My only other option is to find some way to use the existing PCIe interfaces to attach an external GPU, but eGPU enclosures are pretty expensive too, and most don’t even come with a GPU.
I’m trying to stay away from Thunderbolt, so if I go external, I’ll probably look at OCuLink or something similar. TB is just way too expensive IMO; I looked into it, and the whole setup (a TB PCIe card, a TB eGPU enclosure, and a GPU) is something like 40-50% more expensive than other solutions. I’d prefer everything just fit in the server chassis, but then I’m banging my head against Nvidia or modding Intel Arc cards. None of these options are very appealing.
So CPU transcoding for now. I store all my media in 720p AVC/AAC using MP4 as a container, so most streams are direct, and I did that very much on purpose.
Nice! That seems like a sweet little server. Direct play is for sure ideal, plus if 720p is good enough quality for you I’m sure it saves a bunch on disk space.
My setup is an A380 passed through to an Ubuntu 24.04 VM on a TrueNAS CORE host. It was really simple to set up PCIe passthrough; TrueNAS lets you do everything you need through the web GUI, and H.264 and HEVC transcoding worked right out of the box in Jellyfin with the Jellyfin-flavored ffmpeg, if I recall. It also supports AV1 encoding, but I haven’t tried that out. It handles like a dozen 4K transcodes at once; they’re capable little cards. I think ASRock makes a slot-powered, low-profile, single-slot version.
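If anyone wants to sanity-check the passthrough from inside the VM before pointing Jellyfin at it, a one-off Quick Sync transcode works as a smoke test. The jellyfin-ffmpeg path below is where the Debian/Ubuntu package usually puts it, and the file names are placeholders:

```
# the card should be visible to the VM first
ls /dev/dri                     # expect card0 and renderD128

# one-off QSV decode + H.264 QSV encode
/usr/lib/jellyfin-ffmpeg/ffmpeg \
  -hwaccel qsv -hwaccel_output_format qsv \
  -i input.mkv -c:v h264_qsv -b:v 4M -c:a copy output.mp4
```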
I’m familiar with the Sparkle Arc cards, not so much ASRock. I’ll check it out.
My main motivation for 720p is a combination of not caring about 1080p/4K, space, and bandwidth. I only get about 10 Mbps of upload where I am, and it’s basically impossible to get anything faster, so if one person tries to stream 4K, not only are they going to have a bad time, nobody else is going to be watching anything either.
If I had 4K/1080p content, the server would need to transcode it for most people most of the time anyway, which I’d pay for via my electricity bill, and I’d be footing the bill for the extra disk storage to keep it around. On top of that, live transcoding is generally not as good as a 2-pass VBR encode through HandBrake or something.
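For reference, the kind of pre-encode I mean is a single HandBrakeCLI invocation along these lines; the bitrates here are illustrative, not a recommendation:

```
# one-time 2-pass VBR encode to 720p AVC/AAC in an MP4 container
HandBrakeCLI -i input.mkv -o output.mp4 \
  -e x264 -b 2000 --two-pass --turbo \
  -w 1280 -l 720 \
  -E av_aac -B 160
```

Paying the encode cost once up front is what makes almost every stream direct play afterwards.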
There’s obviously more to it overall, but I’ll leave it at that.
Plex supports hardware transcoding, but the CPU in the server where my VM lives doesn’t have a built-in GPU, so I have to add one. It’s part of the reason I moved from the C6100 to an FX2s. The “s” variant of the FX2 has PCIe slots in the back that connect to the hosts. The C6100 had space for a PCIe card, but only one, and given that it only had 2x 1GbE onboard, I’d sooner use that slot for additional networking. The FX2s has 2x 10GbE, so it’s less of a concern to spend the PCIe slots on graphics… Also, there are two slots per half-width blade, which is what I have, so I could add two GPUs per host.
I also want to experiment with 3D-accelerated VDI and cluster-hosted gaming (similar to Stadia) in house… For that I need a decent graphics card. The only ones with a good amount of VRAM are the Intel Flex 140 and the Nvidia T4. The Arc A380 is decent, but 6G of memory is limiting. The Flex 140 has 12G IIRC, and the T4 has 16G. It seems like a lot until you split the GPU among a couple of VMs… On the T4 you get either 2x 8G VRAM systems or 3x 5.33G VRAM systems… I’d want 6G per VM as a minimum. That means that to have two GPU-enabled systems with the A380, you’d basically need one card per VM, and even though they’re pretty cheap cards, with 3 hosts (as is the plan) that gets expensive fast.
My Plex server is also a literal pile of garbage, but I only host on the LAN so I don’t even have to worry about DNS fuckery.
My WAF with Radarr + Sonarr + Kodi is sky high. Plus Home Assistant with smart switches and outlets in every room.
I bet your wife is really cool. You know, by the standards of some nerd on the Internet, but I’m guessing I’d think she was cool.
She’s the coolest. She is also lazy, like me, so home automation is right up her alley.
Women are temporary. Enshittification is eternal. Sail the high seas, matey. Arrrrr
If you do the whole home-server self-host thing, you could probably fool most people by changing the skin to a red theme, though. I use a custom-made PHP piece of shit for mine, but there’s a better one everybody uses; I just can’t remember what it’s called.
As Captain Jack Sparrow put it: “I’m deeply flattered, but my first and only love is the sea.”
F
Hahahahah this
You’re probably using containers
Plex is the native Synology app. Sonarr, Radarr, etc. are in containers. The Synology NAS intermittently stops being accessible, and I haven’t been able to figure out why; I find it impossible to troubleshoot network problems. I think it’s my router: restarting it seems to fix things, but a factory reset didn’t solve the problem.
In summary: my wife is fed up and wants Netflix back.
If this ends up being the place someone is able to offer support, I’ll add some details:
Equipment:
Virgin Media Hub 3 (set to modem-only mode) -> TP-Link AX73 (AX5400) -> LAN connection to the Synology, with a static IP set in the Synology settings.
Synology has the Plex app (native Synology version from the Plex website). Alongside that I’m running the following Docker containers:
GlueTUN project with Surfshark VPN. This runs qBittorrent, Prowlarr, and FlareSolverr (I used [this guide](https://drfrankenstein.co.uk/qbittorrent-with-gluetun-vpn-in-container-manager-on-a-synology-nas/) and [this guide](https://drfrankenstein.co.uk/prowlarr-and-flaresolverr-via-gluetun-in-container-manager-on-a-synology-nas/)); a trimmed compose sketch is further down.
Media fetch project containing Sonarr, Radarr, and Bazarr.
This all seemed to work fine before I added the *arrs. Even after I added them it worked fine initially, but for the past few weeks it has been causing constant problems that are only solved by restarting the router (once or twice a day).
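For anyone trying to reproduce or rule things out, the GlueTUN project above is roughly this shape of compose file. Credentials, ports, and paths are placeholders, and I’ve trimmed it to the two core services:

```yaml
# trimmed sketch — secrets and volumes are placeholders
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=surfshark
      - OPENVPN_USER=xxxx
      - OPENVPN_PASSWORD=xxxx
    ports:
      - 8080:8080          # qBittorrent web UI is published via this container

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # route all torrent traffic through the VPN
    environment:
      - WEBUI_PORT=8080
    depends_on:
      - gluetun
```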
Getting Plex and the *arrs off the NAS and onto a NUC really helped speed things up for me. That, and moving over to UniFi for my networking hardware.
Synology was terrible for me; Unraid is where it’s at. I don’t really ever have to mess with it anymore. Plex was horrendous for uptime too, so I use Jellyfin.
Which router is it?
TP-Link AX73 (AX5400)
I’ve added details to the comment above in case anyone is able to make sense of my problem.
Yeah, this is not a U shaped curve. As you learn more and start to implement concepts like fail-safe and redundancy, the chances of everything in your house being broken goes way back down again.
The main thing you gotta learn, though, is to stop fucking with it.
Or get a second homelab airgapped away from the first one.
You learn something new every day.