• 5 Posts
  • 422 Comments
Joined 2 years ago
Cake day: June 18th, 2023


  • Aren’t neural networks AI, if we take the academic definition into account?

    I know that a thermostat is an AI, because it reacts to a stimulus (the current temperature) and takes an action (starts heating) based on its state. Which is the formal AI definition.

    Wait. That actually means transformers are not AI by definition. Hmm, I need to look into it some more.

    EDIT: I was confusing things, that’s the definition of an AI agent. I’ll go research the AI definition some more :D


  • You’re right, I used the wrong word there. It wasn’t science, more like public perception maybe? I’d consider the lack of research to be a part of science, though.

    I’m not sure what better word would fit there instead. I wouldn’t say it’s the fault of marketing; I’m giving them the benefit of the doubt that they genuinely thought it was healthier to use this kind of filter.

    The comparison that springs to my mind is vapes. AFAIK there’s a lack of research that can tell us anything about long-term issues, but a lot of people consider them healthier. But in this case, common sense isn’t correct either, because when you think about it, vaping probably isn’t healthier, and the idea that it is comes from marketing.

    But in the case of an asbestos filter, I can see why people (and common sense at the time) would assume that it helps.

    So, I guess common sense is the word that I should’ve used, because that’s what was wrong at the time.


  • While I get where you’re coming from, and I’m also not a fan of smoking, isn’t asbestos far worse?

    I remember a friend of mine had a roof on his summer house that was made with asbestos, and it was an extreme problem. Like, you can’t even take it down without investing heavily in protective equipment, or hiring a company that specializes in its disposal, because it’s just that toxic to handle.



  • I second this. I only started slowly switching to nvim a few months ago, and I can already feel slightly annoyed when I have to take my hands off the keyboard to reach for the mouse, or when I’m editing text in e.g. a browser, want to make an edit a few words back, and have to spam arrow keys like a madman instead of just jumping where I need to be.

    Having good keyboard navigation controls is addicting and extremely comfortable.

    I really need to look into tiling window managers and a browser with proper keyboard navigation.


  • I also like all the Alt and Ctrl combinations with arrow keys to move lines and blocks and to jump over words.

    That’s what I love the most about Vim: it has dozens of little tricks like these. Need to jump over a word? Jump to the next occurrence of the letter L? Jump five words? Jump to the second parameter of a function definition? Jump to the matching bracket? There’s a motion for all of that, and more, including “go to definition” and “go to references”, if you set up your Vim correctly.
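    A few of those, for anyone curious (standard Vim motions; `gd` and `gr` need an LSP attached, and `gr` as “references” is the LazyVim mapping, if I remember right):

    ```
    w      " jump forward one word
    5w     " jump forward five words
    fL     " jump to the next occurrence of the letter L on this line
    %      " jump to the matching bracket
    gd     " go to definition
    gr     " go to references
    ```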

    I don’t even know where to start to make vim or neovim do all that.

    What I did was simply install IdeaVim into my Rider, so I can start learning the motions while keeping the features of the IDE I’m used to. But more importantly, I installed LazyVim, a pre-made config for nvim that can do most of that by default and has a simple addon menu (LazyExtras) that automatically downloads and installs the plugins relevant to the language you’re working in. E.g. I need to work in Zig: I just open the LazyExtras menu, find zig-lang, and it installs the LSP, debugger, linter, etc. specific to that language.
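    For reference, toggling an extra in LazyExtras boils down to an import in your lazy.nvim spec, so you can also add one by hand. A minimal sketch of the LazyVim starter layout; the exact `lang.zig` module name is my assumption, so check the LazyExtras list for the real one:

    ```lua
    -- lua/config/lazy.lua in the LazyVim starter
    require("lazy").setup({
      spec = {
        -- LazyVim core plugins
        { "LazyVim/LazyVim", import = "lazyvim.plugins" },
        -- the Zig extra: pulls in the LSP, debugger, linter, etc. for the language
        { import = "lazyvim.plugins.extras.lang.zig" },
        -- your own plugins and overrides
        { import = "plugins" },
      },
    })
    ```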



  • Square Enix actually has pretty sick automated QA already. There’s a cool talk in the GDC Vault about how they did it for the FFVII remake, and I highly recommend watching it if you’re at all interested in QA.

    It has nothing to do with AI, it’s just plain old automation, but they solve most of the issues you run into when building automated tests in a non-discrete 3D playspace, and they do it in a pretty solid way. It’s definitely something I’d love to have implemented in the games I’m working on, as someone who worked in QA and now works in development. Having a mostly reliable way to smoke-test levels for basic gameplay, without having to torture QA into running the test case yet again, is good, and it allows QA to focus on something else. But the tools also need oversight, so it’s not really a job lost. In summary: the talk is cool tech and worth the watch.

    However, I don’t think AI will help in this regard; something as unreliable and random as current AI models is not a good fit for this job. You want deterministic test cases that you can quantify, and when something doesn’t match, an actual human to look at why. AI also probably won’t be able to find the clever corner cases and bugs that need human ingenuity.
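    To illustrate what I mean by a deterministic test case, here’s a rough sketch. Every engine function in it is made up (this is not from the talk), it’s just the shape of the thing:

    ```lua
    -- Replay recorded inputs against a level with a fixed seed, then assert on
    -- discrete checkpoints, so results are comparable run-to-run.
    -- `engine` and everything hanging off it is a hypothetical scripting API.
    local function smoke_test_level(level_name, recording_path)
      engine.set_random_seed(12345) -- fixed seed keeps the simulation deterministic
      local level = engine.load_level(level_name)
      local player = engine.replay_inputs(level, recording_path)

      -- quantifiable pass/fail criteria; on a mismatch, a human looks at why
      assert(player:reached_checkpoint("end_of_level"), "never reached the exit")
      assert(player:death_count() == 0, "died during a known-good recording")
      assert(engine.p95_frame_time_ms() < 33.4, "p95 frame time regressed past 30 FPS")
    end
    ```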

    Fuck AI. I kind of hope this is just marketing talk and they’re actually just improving the (deterministic) tools they already have (which actually are AI by definition, since they also do level exploration on top of recorded inputs), and calling it “AI” to satisfy investors/management without actually slapping a glorified chatbot into the tech for no reason.


  • Large companies probably do that anyway.

    Take Blizzard, for example. They just released a new patch where the class campaign quests for 8 of the 12 classes don’t work. Sure, it’s a remixed version of an older expansion, and with all the phasing stuff I can kind of imagine some of the phasing issues being caused by, I don’t know, the player having a weird combination of completed content that’s hard to properly catch in testing, since there are quite a lot of variables.

    But when one of the class quests requires crafted items to complete, while crafting is unavailable by design in the Remix, there’s just no excuse. Either they just don’t give a fuck about an issue that’s literally a progression blocker with a 100% repro rate (while also being pretty easy to fix), or no one ever tested it even once. And it’s not just some random side quest; it’s literally the main class campaign, one of the main features of the expansion.

    As someone who has worked in QA and gamedev, I can’t imagine how something as obvious as this could ever get approved for release. That’s something you catch immediately. Hell, you don’t even have to play through it to realize it might be a problem.




  • Definitely, but the issue is that even the security companies that actually do the assessments seem to be heavily transitioning towards AI as well.

    To be fair, in some cases ML is actually really good (e.g. in EDRs; bypassing an ML-trained EDR is really annoying, since you can’t easily see what triggered the detection, and that’s good), and that will carry most of the prevention and compensate for the vulnerable and buggy software. A good EDR and WAF can stop a lot. That is, assuming you can afford such an EDR; an AV won’t do shit. But unless we get another WannaCry, no one cares that a few dozen people got hacked through a random game/app, “it’s probably their fault for installing random crap anyway”.

    I’ve also already seen a lot of people either writing reports with LLMs, or building whole tools that run “agentic penetration tests”. So, instead of a Nessus scan, or an actual red teamer building a scenario themselves, you get an LLM to write and decide on a random course of action, and they just trust the results.

    Most of the cybersecurity SaaS corporations didn’t care about the quality of the work before, just like the companies actually buying the services didn’t care (but had a checkbox to tick). There’s not really an incentive for them to start now. Worst case, you get into a finger-pointing scenario (“We did have it pentested” -> “But our contract says that we can’t find everything 100% of the time, and this wasn’t found because XYZ… Here’s a report with our methodology showing we did everything right”), or the modern equivalent, “it was the AI’s fault”, and maybe a slap on the wrist. So I don’t think the field will get more important, just way, way more depressing than it already was three years ago.

    I’d estimate it will take around a decade of unusable software and dozens of extremely major security breaches before any of the large corporations (on either side) concede that AI was a really, really stupid idea. And by that time they’ll probably also realize that they can just get away with buggy, vulnerable software and not care, since breaches will be pretty commonplace and probably won’t affect larger companies with good (and expensive) frontline mitigation tools.


  • I worked as a pentester and eventually a Red Team lead before leaving for gamedev, and oh god, this is so horrifying to read.

    The state of the industry was already extremely depressing, which is why I left. Even without all of this AI craze, the fact that I was able to go from junior to Red Team Lead, in a corporation with hundreds of employees, in a span of 4 years, is already fucked up. It happened solely because Red Teaming was starting to become a buzzword, and I had a passion for the field (and for Shadowrun) while also being good at presentations that customers liked.

    When I got into the team, the “in-house custom malware” was a web server with a script that polls it for commands to run with cmd.exe. It had pretty involved custom obfuscation, but it took me like two engagements, and the guy responsible for it leaving, before I found out (during my own research) that WinAPI is a thing, and that you actually should run stuff from memory, and why. I was just a junior at the time, and this “revelation” eventually got me an unofficial RT Lead position, with 2 MDs per month for learning and internal development; the rest had to be spent on engagements.

    And even then, we did kind of OK in engagements, because the customers didn’t know and also didn’t care. I was always able to come up with “lessons learned”, and we always found some glaring security policy issues, even with limited tools. But the thing is, they still did not care. We reported something, and two years ago they still had the same brute-forceable Kerberos tickets. It already felt like the industry was just a scam done for appearances, and if it’s now just AIs talking to AIs, then, well, I don’t think much would change.

    But it sucks. I love offensive security; it was a really interesting few years of my career, but it was so sad to do if you wanted to do it well :(




  • For me, the issue isn’t so much that they’re forcing the data collection (on some users, i.e. the free ones, to be clear).

    My issue is with the way they’re spending the development money I give them for the product. Instead of actually making the core features of the editor better, they’re pouring it into AI hype slop that apparently can’t even get good results (which they outright admit in the blog post). Everyone knows at this point that it’s a hype bubble that will never be usable, and they’re grasping at straws.

    I don’t want to pay $200 a year just for them to add a dumb chatbot and data collection to my IDE, or to make the code completion dumber and random instead of actually deterministic. So I don’t: I canceled my subscription, and I’m sticking to the perpetual license while slowly switching to nvim. But I can still make fun of them about it. I have been recommending JetBrains products for most of my life, and they have disappointed me with the direction they’re going, so I’ll make sure to un-recommend them.


  • The context is that they made a blog post written in, at least in my opinion, an extremely pleading tone. They’re basically crying that they can’t make a good AI with public data, and asking if you could please turn on their new AI data collection that would steal all your code. I’ve seen a few “we will use your data for AI” posts, and the tone this one was written in was just unsettling.

    I can’t really say why, but I find this style of communication pretty unsettling. It has exactly the same vibe as the picture in the post.

    So, if you pay for their IDEs, nothing changes, but you can opt in to them using your data for AI training, and they are pleading with you to do so. If you use the free version, it’s opt-out and turned on by default.


  • I don’t think it’s misleading, or at least the point was not to imply that they’re forcing the data collection (which they are, for free users, though it is opt-out). The point is that they’re downright emotionally manipulative in the blog post. The post in which they announce it is written, at least in my opinion, in exactly the same tone as the picture: they’re basically crying that they can’t make a good AI without stealing your private data, pleading with you to turn it on.

    I’ve seen a few similar posts of products announcing AI data collection, and this one was the most unsettling; hence the meme.


  • This was one of my biggest issues too, but I did manage to successfully switch to nvim a few months ago, by installing IdeaVim into Rider and vscode-vim into VS Code (so I can’t easily escape it when I get lazy), but most importantly by setting LazyVim as my default editor, which has been a lifesaver.

    It has a pretty good LazyExtras interface for easily installing a ton of plugins, for almost every language. You just open the LazyExtras menu, select the language you want, and it installs the LSPs, debuggers, and whatnot you may need for it. It’s probably using the nvim-lspconfig mentioned in other comments under the hood, but it has been pretty seamless.
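    The flow is roughly this (the toggle key is from memory, so double-check in the UI):

    ```
    :LazyExtras    " open the extras browser
                   " move to an entry like lang.zig, press x to toggle it,
                   " then restart nvim and the new plugins get installed
    ```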

    But any other pre-made nvim config will work; this one is just more approachable than someone’s random plugin list.