• dreamkeeper@literature.cafe
    14 hours ago

    If this is possible then your AI workflows are catastrophically broken. Even my dumbass company knows AI needs human supervision at all times.

Reddit and Lemmy are so extreme on this topic that it’s impossible to express a nuanced opinion on the issue. AI is an undeniably powerful tool for any good programmer, but it needs to be used properly.

    People being this irresponsible with it must work on software where there are no legal consequences if it breaks. As brainwashed as my company is on AI they would never allow us to create a process that releases unreviewed code.

    • criss_cross@lemmy.world
      12 hours ago

      Oh they are lol. Our company was full steam on it and is just now pumping the brakes as they’ve seen the chaos.

      Don’t get me wrong. I think Gen AI can be, gasp, useful! It’s great in small pockets where you can handhold it and verify its output. It’s great for cutting through the noise that Google and others have failed to address. It’s good at summarizing text.

      I’m not so high on it being this massive reckoning that’s going to replace people. It’s just not built for that. Text prediction can only go so far and that’s all GenAI is.