We all know what AI is doing to the workforce; that’s no mystery. Has AI actually served you well, or is it all overhyped slop?
The only things it’s good for are kinda evil.
The technology is remarkable, the implementation is lame, the impact is happening too fast for us to adapt, and the damage to artists and creatives is enough to make you cry.
Someone posted a meme yesterday comparing it to The One Ring from LOTR and I think it’s spot on.
My thoughts are that basic things, such as using a neural network to train a video game enemy on how to use terrain, are okay, as long as it’s all simulations run on your own hardware/servers. Things like that, which won’t take up an ocean’s worth of water in a day, are fine by me. Using machine learning to sort through data, with at least a couple of humans verifying there were no errors? Okay in my book.
What I hate is when these brain-dead companies push generative AI (genAI) onto us like it’s somehow comparable to a sci-fi super-AI that solves every problem with 100% accuracy and without massive environmental destruction, alongside the terrible product itself. Same with any person willing to use it while blindly trusting everything it outputs. Normally I give people a baseline amount of respect when I first meet them, but no matter what, my respect for someone who willingly uses and trusts genAI is never going to rise unless they give up the genAI and start on the path of regaining their humanity.
I especially wanna see anybody using genAI for anything education-related (besides studying the dangers of it), bug bounties/open source, or any kind of art reform themselves and get onto the path of regaining their humanity. I know it’ll never happen since those people were brainless before genAI, but a beaver can dream.
I have zero sympathy for those who have gone into psychosis because of it or have had similar issues caused by it. They lacked the willpower and critical thinking to save themselves.
Edit:
I used to be a genAI user, but then I took to the path of regaining my humanity, returning to being a person who believes the past, present, and future will be human (so long as humans exist) no matter what. It may seem hard and may take a lot of work to avoid genAI slop, but it is very rewarding IMO. I know I am on the right path because I know supporting your fellow man is infinitely more rewarding than letting genAI take away your brain and, more importantly, your humanity. I believe in a very human world and refuse to believe genAI has any use in it.
This is the largest technology con job right now. NFTs failed, the “metaverse” failed, now a badly trained AI is the “solution to humanity’s problems”. This is just as stupid as religion.
We all know what AI is doing to the workforce
Do we?
https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs
Summary
While anxiety over the effects of AI on today’s labor market is widespread, our data suggests it remains largely speculative. The picture of AI’s impact on the labor market that emerges from our data is one that largely reflects stability, not major disruption at an economy-wide level. While generative AI looks likely to join the ranks of transformative, general purpose technologies, it is too soon to tell how disruptive the technology will be to jobs. The lack of widespread impacts at this early stage is not unlike the pace of change with previous periods of technological disruption. Preregistering areas where we would expect to see the impact and continuing to monitor monthly impacts will help us distinguish rumor from fact.
The narrative that AI is causing job loss is a marketing strategy by the AI companies to boost their recognition through fear.
I recently completed a fairly complex implementation and training effort in government for a team of non-technical users, covering agents, agentic workflows, some RAG, and small-scale enterprise app deployments.
I find it a very cool technology, but it is still pretty dumb. When unbounded, AI does some cool stuff, but building it into complex workflows has, in my experience, produced a mixed bag of results. It is not bad at very specific functions, such as mining data for patterns, but add any gray area and it kind of takes stabs in the dark, much like a badly defined web search.
Even our technical teams sell it as a 10-20% increase in efficiency, not a firesale position replacement. And they’re mandated to adopt and distribute it as widely through govt as possible.
In short, I think this is a fair assessment lol. AI may replace us one day, but the models are far too new yet.
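For anyone wondering what the “some RAG” piece of a setup like that looks like, here is a minimal sketch: a toy bag-of-words scorer stands in for a real embedding model, the documents are made up, and the actual LLM call is omitted, so nothing here reflects what that team actually deployed.

```python
import math
from collections import Counter

# Toy document store; in a real deployment these would be chunks of agency documents.
DOCS = [
    "Leave requests must be submitted through the HR portal at least two weeks in advance.",
    "Expense reports over $500 require director approval before reimbursement.",
    "VPN access is provisioned by the IT service desk within three business days.",
]

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a plain bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank stored chunks by similarity to the question and keep the top k.
    q = embed(question)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(question: str) -> str:
    # The retrieved chunks get stuffed into the prompt; the LLM call itself is left out here.
    context = "\n".join(f"- {c}" for c in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I get VPN access?"))
```

The whole trick is ranking stored chunks against the question and stuffing the winners into the prompt, so the answer is only as good as that retrieval step, which is exactly where the gray-area problems described above tend to show up.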
There are some minor tools for small work that have been helpful, but overall it is intrusive slop.
Windows updates keep trying to add ai.exe and aimgr.DLL back to my Office folder, which I delete because otherwise they randomly hog CPU and bog down the computer.
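If you want to script that cleanup, here is a minimal sketch; it assumes the default Office16 install path (yours may differ) and an elevated prompt, and as noted above, the next update will likely just put the files back.

```python
from pathlib import Path

# Assumed Office install location -- adjust to wherever ai.exe actually shows up on your machine.
OFFICE_DIR = Path(r"C:\Program Files\Microsoft Office\root\Office16")

# File names taken from the comment above; run this from an elevated (administrator) prompt.
for name in ("ai.exe", "aimgr.DLL"):
    target = OFFICE_DIR / name
    if target.exists():
        target.unlink()  # delete the file
        print(f"removed {target}")
    else:
        print(f"{name} not present, nothing to do")
```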
Then every damn app has a new AI panel that is garbage.
I haven’t used Windows 11 in a long time; is it true Copilot is in Notepad now?
It’s simultaneously awesome and overhyped.
I would add one more adjective to complete the description: terrible. Depending on the situation, sometimes it’s awesome, sometimes it doesn’t live up to the hype, and sometimes it’s downright terrible.
Slop
Overhyped trash.
I think Alan Tudyk got fucked over because Will Smith couldn’t handle not being the most likable character in a movie.
As soon as test audiences said they loved the robot, they cut back Tudyk’s scenes and completely dropped him from the promotion and intro credits.
For the average person, it just provides an objectively inferior result compared to reading the results on Google. Like, I cannot imagine thinking, “Wow, I really want an incorrect and incomplete response to my query that doesn’t even link me to additional context to fill in the gaps right now.”
Did you somehow forget that 90% of the shit on the internet three years ago, just before AI, was absolute garbage websites copy/pasting from every other website in existence, so full of ads that people couldn’t even find the actual content on the page?
All that garbage is still there, under the slop, under the Reddit and other promoted answers. It’s just another layer on top of what we were actually looking for.
I desperately want human intelligence in machines, but LLMs are not that, and the economy is going to plummet to prove it.
Generative fill saves me a bunch of time in Photoshop.
From an engineering perspective: agents are great if you are building prototypes or are fleshing out an area of your stack that is heavily standardized (i.e., the AI has had a lot of examples to learn from, e.g., simple authentication flows, simple database singletons, basic automated QA testing, etc.); however, if you are building new stuff, or stuff the AI has not trained on, it is absolutely, infuriatingly prone to outputting garbage.
edit: it’s no secret either; spin up a new project and you’ll be greeted with a “don’t trust me” billboard right from the get-go.
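To make the “heavily standardized” point concrete, this is roughly the kind of boilerplate agents tend to get right, because the training data contains thousands of near-identical examples. A minimal sketch of a simple database singleton in Python, using only the standard library (the class, helper, and table names are illustrative, not from any real project):

```python
import sqlite3
from threading import Lock

class Database:
    """Process-wide SQLite connection shared via a classic singleton."""
    _instance = None
    _lock = Lock()

    def __new__(cls, path: str = "app.db"):
        # Double-checked locking so only one connection is ever created.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    instance = super().__new__(cls)
                    instance.conn = sqlite3.connect(path, check_same_thread=False)
                    cls._instance = instance
        return cls._instance

    def query(self, sql: str, params: tuple = ()):
        # Thin helper: run one statement, commit, and return all rows.
        with self._lock:
            cur = self.conn.execute(sql, params)
            self.conn.commit()
            return cur.fetchall()

# Usage: every call site gets the same connection.
db = Database()
db.query("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
db.query("INSERT INTO notes (body) VALUES (?)", ("hello",))
print(db.query("SELECT * FROM notes"))
```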
It’s OK in some instances where it’s a tool that helps your hands. Once you start outsourcing your head to ChatGPT, you’re voluntarily letting your mental faculties rot for the sake of perceived comfort.







