
  • I watched through Day of Honor a couple of times today, but my viewing was kinda choppy since I had to work.

    I just want to clarify “give herself up”: do you mean she is willing to become part of the Voyager “collective” and to put aside her need to return to the Borg?

    If my assumption above is correct, then yes. She is growing exponentially, personality-wise, but faces significant challenges in doing so.

    Personally, I have been around engineers my entire life. Some people I know could rattle on for hours about something like P vs NP even if they just learned about it a few hours ago. Put that same person in a complex social environment and they are absolutely clueless. It’s similar with Seven.

    Assuming I didn’t know anything about her timeline after Day of Honor, my guess would have been that it would take years for her to learn how to operate in a complex social structure like the one we are accustomed to. Janeway seems bright enough to understand that as well. So yeah, it would be a very long time before she could make the kinds of decisions we take for granted, and Janeway would have to make them for her, like a parent.

    Fast-forward a bit to Picard and you can see how long it took for her character to develop into something that didn’t resemble a robot. (I am willfully excluding some later episodes of Voyager that were kind of odd, btw.)


  • It was totally fine. Borg implants or not, she was still human. She also didn’t have a choice about becoming Borg at such a young age. When her connection with the Collective was cut, she basically became a child again, making her Janeway’s responsibility. (That was close to Janeway’s logic, I believe, and I agree with it. It was a human decision made for another human who was incapable of making decisions.)

    The biggest thing is that Seven had already signed a contract with UPN, so she was kinda stuck for a few episodes anyway. Janeway knew this, so after thinking it over with a 50-gallon drum of coffee and a few packs of menthol Kools, she decided to just run with it and make it dramatic. (The Borg attorneys failed to overturn the terms of the contract even after several weeks of absolutely phenomenal work.)


  • Fake or outdated info, actually. While this is a small tangent, I make it a habit to review basic, introductory information on a regular basis. (For example, I’ll still watch the occasional 3D printer 101 guide even though I could probably build one from scratch while blindfolded.)

    I have been in IT for a very long time and have branched out into other engineering fields over the years. What I have found, unsurprisingly, is that methods and theories can get outdated quickly. So, regularly reviewing things I consider “engineering gospel” is just healthy practice.

    For the topic at hand, it doesn’t take much misinformation (or outdated information) to morph into something absolutely fake, or at best, completely wrong. It takes work to separate fact from fiction, and many people are too lazy to look past internet pictures with words, or 15-second video clips. (It’s also hard to break out of believing unverified information “just because that’s the way it is”.)


  • All good! It’s the same situation as I described, and I see that increasing temps did help. It’s good to do a temperature tower test for quality and also a full-speed test after that. After temperature calibration, print a square of only 2 or 3 bottom layers that covers the entire bed at full speed or faster. (It’s essentially a combined adhesion/leveling/extrusion volume/z offset test, but you need to understand what you are looking at to see the issues separately.)
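
    To put rough numbers on what “full speed” demands from the hotend, here is a quick back-of-the-envelope calculation in Python (the line width, layer height, and speed are just example values; plug in your own slicer settings):

    ```python
    # Rough volumetric flow check for the full-speed square test.
    # All three inputs are example values, not recommendations.
    line_width = 0.45    # mm, typical line width for a 0.4 mm nozzle
    layer_height = 0.25  # mm, chunky bottom layers
    speed = 150.0        # mm/s, "full speed or faster"

    flow = line_width * layer_height * speed  # mm^3/s demanded from the hotend
    print(f"Required flow: {flow:.1f} mm^3/s")  # ~16.9 mm^3/s here
    # Stock hotends are often quoted around 10-15 mm^3/s with PLA, so a
    # number above that range means thinning lines are expected at speed.
    ```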

    If you have extrusion problems, the layer line will start strong from the corners, get thin during acceleration, and may thicken up again at the bottom of the deceleration curve. A tiny bit of line-width variation is normal, but full line separation needs attention.

    Just be aware if you get caught in a loop of needing to keep bumping up temperatures, as that starts to point to thermistor, heating element, or even mechanical problems.


  • I am curious why they would offload any AI tasks to another chip. I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.

    It’s the rendering bit that takes all the complex maths, and if that is reduced, that would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said they weren’t GPU optimized. (FSR4 is going to be fully GPU optimized, I am sure of it.)

    If the rendered image covers only 85% of a 4k image’s pixels, that’s ~1.2 million pixels that need to be computed, and it still seems plausible to keep everything on the GPU.
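
    Quick sanity check on that math (assuming “4k” means 3840x2160 and the 85% refers to total pixels rendered):

    ```python
    # Back-of-the-envelope pixel math for the 85% figure above.
    total_4k = 3840 * 2160           # 8,294,400 pixels in a 4K frame
    rendered = int(total_4k * 0.85)  # pixels the GPU actually shaded
    missing = total_4k - rendered    # pixels the upscaler must fill in
    print(f"{missing:,} pixels to reconstruct")  # 1,244,160 -> ~1.2 million
    # At 60 fps that is roughly 75M pixel predictions per second, still
    # small next to the shading work the GPU skipped by rendering at 85%.
    ```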

    With all of that blurted out, is FSR4 AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU to offload AI compute at speeds that didn’t risk creating additional lag. (I am just hypothesizing, btw.)


  • I suppose you are correct. If the bit isn’t structural, it doesn’t need to pass any test for microcracks. If it is structural and it passes testing, YOLO that shit.

    It’s just the core frames that need serious attention though. I don’t think I have been around a single aircraft that wasn’t constantly bleeding some kind of fluid, so everything else not related to getting the thing in the air and keeping it from completely disintegrating while in flight is mostly optional. (I am joking, but not really. Airplanes hold the weird dichotomy of being strangely robust and extremely fragile at the same time.)


  • And there are significant technology differences. The new upgrade will be the B-52J or K.

    Proper aircraft maintenance cycles are intense, so it would surprise me if any of the airframes we use now have original 1952 parts. Aircraft are subject to lots of vibration, and the aluminum in B-52s will eventually stress-crack because of it. (It wouldn’t surprise me if composites were added in many places instead of aluminum replacements, but that is just speculation.)

    Also during those maintenance cycles, it’s much easier to do systems upgrades since the aircraft is basically torn down to its frame anyway.

    It’s the same design as what we had in 1952, but they ain’t the same aircraft, philosophically speaking.


  • 185C is cold for PLA. It may work for slow prints, but my personal minimum has always been around 200C, and my normal print temperature is usually 215C.

    Long extrusions are probably sucking out all the heat from the nozzle and it’s temporarily jamming until the filament can heat up again.

    Think of the hotend as a reservoir for heat. During long extrusions, it will drain really fast. Once the hotend isn’t printing for a quick second, it will fill back up really fast. At 185C, you are trying to print without a heat reservoir. I mean, it’ll work, but not during intense or extended extrusions.
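
    Here is a toy model of that reservoir idea, just to show the shape of the effect (every constant is invented for illustration, nothing here is measured from real hardware):

    ```python
    # Toy model: nozzle temperature droops during a long extrusion and
    # recovers during a pause. Constants are made up for illustration.
    set_temp = 185.0   # C, commanded temperature
    heater_gain = 4.0  # C/s the heater can add back near the setpoint
    melt_drain = 6.0   # C/s pulled out by fast, continuous extrusion

    temp = set_temp
    for second in range(10):
        extruding = second < 6  # six seconds of extrusion, then a pause
        temp += (heater_gain if temp < set_temp else 0.0)
        temp -= (melt_drain if extruding else 0.0)
        print(f"t={second + 1}s extruding={extruding} temp={temp:.0f}C")
    # Starting at 185C, even a ~15C droop puts PLA well below a workable
    # melt temperature; starting at 215C leaves margin for the same droop.
    ```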


  • For my applications, quantity is better. Since I do CAD work in addition to 3D scanning with only occasional gaming, I need the capacity.

    While I am 3D scanning, I can use upwards of 30GB of RAM (or more) in one session. CAD work may be just as intensive in the first stages of processing those files. However, I wouldn’t consider that “typical” use for most people.

    For what you describe, I doubt you will see much of a performance hit unless you are benchmarking and being super picky about the scores. My immediate answer for you is quantity over speed, but you need to test and work with both configurations yourself.

    I don’t think I saw anyone mention that under-clocked RAM may be unstable in some circumstances. After you get the new setup booting with the additional RAM, do some stress tests with Memtest86 and Prime95. If those are unstable, play with the memory clocks and timings a bit to find a stable zone. (Toying with memory speeds and timings can get complicated quickly, btw. Learn what timings mean before you adjust them, as clock speed isn’t everything.)
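
    If you want a quick-and-dirty sanity check from inside the OS before rebooting into Memtest86, something like this works (a userspace pattern test is nowhere near as thorough, since it can’t touch memory the kernel has reserved; the chunk size is a made-up example):

    ```python
    # Crude userspace RAM pattern check. Not a substitute for Memtest86 or
    # Prime95; it only catches gross instability after a timing change.
    import array
    import random

    CHUNK_MB = 256  # example size; raise it toward your free RAM

    def pattern_pass(pattern: int) -> bool:
        n = CHUNK_MB * 1024 * 1024 // 8        # number of 64-bit words
        buf = array.array("Q", [pattern]) * n  # fill the chunk
        return all(word == pattern for word in buf)  # read it back

    for p in (0x0000000000000000, 0xFFFFFFFFFFFFFFFF,
              0xAAAAAAAAAAAAAAAA, random.getrandbits(64)):
        result = "ok" if pattern_pass(p) else "MISMATCH -> unstable"
        print(f"pattern {p:016x}: {result}")
    ```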


  • It seems like it would be extremely fast to me. Take a 50x50 block of pixels and expand those across a 100x100 pixel grid, leaving blank pixels where you have missing data. If a blank pixel is surrounded by blue pixels, the probability of the missing pixel being blue is fairly high, I would assume.
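
    A minimal sketch of the non-AI version of that idea (pure illustration in numpy, not how FSR actually works; the neighbour-averaging rule here stands in for the “probably blue” guess):

    ```python
    # Spread a 50x50 image onto a 100x100 grid, then fill each blank pixel
    # from the average of its known neighbours. Illustration only.
    import numpy as np

    small = np.random.randint(0, 256, (50, 50, 3), dtype=np.uint8)

    big = np.zeros((100, 100, 3), dtype=np.float64)
    known = np.zeros((100, 100), dtype=bool)
    big[::2, ::2] = small  # known pixels land on the even grid positions
    known[::2, ::2] = True

    # Every blank pixel gets the mean of the known pixels in its 3x3 patch.
    for y, x in zip(*np.where(~known)):
        ys = slice(max(y - 1, 0), y + 2)
        xs = slice(max(x - 1, 0), x + 2)
        mask = known[ys, xs]
        big[y, x] = big[ys, xs][mask].mean(axis=0)

    upscaled = big.astype(np.uint8)  # (100, 100, 3), no blanks left
    ```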

    That is a problem that is perfect for AI, actually. There is an actual algorithm that can be used for upscaling, but at its core, it likely boils down to a single function, and AIs are excellent at replicating the output of basic functions. It’s not a perfect result, but it’s tolerable.

    Whether this example is correct for FSR or not, I have no clue. However, having AI shit out data based on probability is mostly what they do.