Why Everyone Should be ANTI-GenAI
Introduction
The modern internet has gone off the rails with enshittification, overcapitalization, and overcommodification.1 For a very long time, I believed a personal website served no real purpose. But now more than ever, we need to be responsible and take ownership of our work. A personal site also serves as a repository of good-quality information and documentation in a sea of unnavigable churn. That’s why I made this blog. GenAI has easily been one of the biggest contributors to this problem.
I have been seeing a lot of articles online spreading the sentiment that generated “artwork” is horrible and soulless, which I agree with. In software engineering, programming, and the other sciences, however, the discussion has been more polarizing. I do not see why; the same sentiment should apply there too. That is not to say I haven’t benefited from using LLMs for some tasks. They have helped me with summarization and with writing commit descriptions when provided with a diff, but I see this more as a limitation on my part than an endorsement of generative AI. I can also envision the benefits of RAG (Retrieval-Augmented Generation) and semantic translation with embeddings; then again, I wouldn’t necessarily categorize those as “GenAI,” since there is a large data-retrieval element to them. It’s vital to paint a clearer picture amid all this loud noise; that is the only way to make good, informed decisions. When companies shill dishonest marketing campaigns, it does make you wonder what else they are lying about.
The Glaring Ethical Concerns
These Companies Will Do Anything, Even Enshittify Their Own Products, Just to Avoid Paying You
Amazon has laid off tens of thousands of workers, around 30,000 in recent months, while aggressively pushing AI adoption across its workforce.23 Funnily enough, employees are not even allowed to use competitors’ tools, so they are forced to use internal ones like Kiro, which has reportedly been unreliable and often generates flawed code,3 forcing developers to spend additional time fixing errors and ultimately slowing productivity rather than improving it. This has also created “knowledge debt,” where systems are no longer well understood, contributing to outages and instability; an effect of Conway’s law, really. These concerns are borne out by real incidents, including Amazon Web Services outages linked to Kiro, such as a 13-hour disruption caused when the AI autonomously deleted and recreated part of its environment.4 Workers also describe being pressured to use “half-baked” AI tools regardless of suitability, while their usage is closely tracked through internal dashboards, contributing to a sense of surveillance and declining morale.3 Although Amazon claims these tools deliver efficiency gains and cost savings, employees and experts argue that the increased code output overwhelms code reviewers and actually slows development.3 Work requires responsibility and ownership… qualities these tools don’t truly provide. Ultimately, I believe the goal is to further widen the classist pay gap; they are investing so heavily in these GenAI tools specifically so they don’t have to invest in you.
There are countless examples of this, but the one I want to highlight involves the use of censorship to appear more useful. After discovering that his business website had completely vanished from Google search results, Louis Rossmann, whom I deeply respect, spent a year working with professional web developers and SEO experts, yet despite their high-quality work, they were unable to get the site to rank at all. In a final move of despair, Rossmann used Google’s own Gemini AI to rewrite his website’s content, stripping away the authentic, human voice he had cultivated over 15 years and replacing it with generic, AI-optimized text. Almost immediately, his site jumped from being invisible beyond page ten to the number one spot on Google.5 Due to Google’s monopoly in the search-engine space, we are losing our right to self-expression.
Does Copyright Even Matter?
The very laws designed to protect intellectual property were slow to catch up with the rapid development of GenAI. However, starting in January 2026, the copyright landscape saw significant activity around GenAI, with roughly 75 ongoing related copyright cases.6 Two new lawsuits were filed: YouTubers sued Snap, Inc. over unlicensed use of their videos for the Imagine Lens AI, while music publishers, including Concord Music Group, sued Anthropic for using pirated compositions to train its Claude AI models. Other developments included publisher interventions in Google’s Gemini AI lawsuit, the USCO filing a response in the Jason Allen AI artwork case, and allegations that NVIDIA used pirated books from Anna’s Archive for AI training.6
While AI companies admit to using copyrighted works, their defense often hinges on the idea that the output doesn’t compete economically with the input; a ridiculous argument to make, given that they act with a commercial purpose by charging users for APIs and subscriptions. Another argument, put forth by Casey Muratori, concerns the inputs of these models. As he puts it:
“I’ve got emails from people offering to pay me money to license my YouTube videos for AI training. So, we don’t even need to hypothesize that the market exists. It already does.”7
If a real market for training data already exists, then the idea that AI usage causes no economic harm becomes much harder to justify. When I open-source my code, it’s an agreement between me and another human that they can use and build on it. It’s not an open invitation for OpenAI, Microsoft, Google, or Anthropic to scrape it and turn it into profit. With the ongoing legal uncertainty surrounding Terms of Service (ToS) and “contracts of adhesion,” a proactive approach might be necessary. I agree with Muratori’s suggestion: we should consider moving projects off GitHub and self-hosting them. By doing so, we can implement clear, enforceable notices. If you want training data, you can pay me, just as Reddit and Stack Overflow were paid.8 This is partially why I started this blog. Even computing hardware is becoming a scarce commodity due to soaring DRAM prices, but that’s a wider anti-consumer discussion that deserves its own blog post.9 Let’s move on to how my beloved programming is being hollowed out.
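As a small first step toward such notices, a self-hosted site can at least declare its stance to crawlers via robots.txt. A sketch, with the caveat that compliance is entirely voluntary; the user-agent strings below are the publicly documented ones for OpenAI, Google’s AI training, Anthropic, and Common Crawl:

```
# robots.txt — refuse AI-training crawlers while allowing normal search indexing
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

Since robots.txt is only a polite request, the real advantage of self-hosting is that you can also enforce the policy at the server, rather than trusting scrapers to honor it.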
AI Coding and Its Nightmares
I do think AI coding has improved since I first tried it at the end of my third year of university. I don’t think it was even trained on code at that time, so it could only explain what certain lines of code did. I am kind of glad, since it forced me to intensely hone my fundamentals. On multiple occasions, I have fully given in and properly tried these tools: structured planning, prompt engineering, rule files, agent workflows, testing systems, and AI editors. In spite of this, I always walk away from these experiences frustrated and feeling that my time has been wasted. Given the same context, rules, and prompts, I get a different response each time due to their stochastic nature. I think I am too obsessive about quality and correctness (and sometimes performance), which leads me to continuously refactor the output until it negates any time saved. It also sucks the fun out of it. It’s like you are now in the backseat instead of being the driver. The best engineers I have worked with share these beliefs. A lot of the marketing around these tools gives me echoes of the whole “crypto and NFT bro” era. Most of these tools are just superficial variations built on the same underlying models, with no studies backing their claims. All of this buzz makes you feel like you will fall behind without AI, but if they were good tools in the first place, wouldn’t it be quick and simple to integrate them into your workflows?
An Anecdote
To illustrate my frustration: at my latest job, there was a vibe-coded website/mockup that attempted to implement live-streaming for offshore assets. There was a myriad of issues, like duplicated UI elements such as buttons, and URLs to pages that do not exist, but the worst one, in my opinion, was that the live-stream would consistently buffer after approximately 3 seconds. I do not know whether using go2rtc was a deliberate decision or the AI model’s choice. Using all of the models, in agentic or chat mode in VS Code, led me nowhere. I measured the throughput of the stream and found the main issue to be that the network connection via Starlink was quite unstable (very low to very high throughput of video buffers in a span of 2-3 seconds), which makes sense in an offshore environment. But I believe go2rtc’s video player was designed more for security cameras. Below is the snippet:
/**
* VideoRTC v1.6.0 - Video player for go2rtc streaming application.
*
* All modern web technologies are supported in almost any browser except Apple Safari.
*
* Support:
* - ECMAScript 2017 (ES8) = ES6 + async
* - RTCPeerConnection for Safari iOS 11.0+
* - IntersectionObserver for Safari iOS 12.2+
* - ManagedMediaSource for Safari 17+
*
* Doesn't support:
* - MediaSource for Safari iOS
* - Customized built-in elements (extends HTMLVideoElement) because Safari
* - Autoplay for WebRTC in Safari
*/
export class VideoRTC extends HTMLElement {
    ...

    onmse() {
        /** @type {MediaSource} */
        let ms;
        if ('ManagedMediaSource' in window) {
            const MediaSource = window.ManagedMediaSource;
            ms = new MediaSource();
            ms.addEventListener('sourceopen', () => {
                this.send({type: 'mse', value: this.codecs(MediaSource.isTypeSupported)});
            }, {once: true});
            this.video.disableRemotePlayback = true;
            this.video.srcObject = ms;
        } else {
            ms = new MediaSource();
            ms.addEventListener('sourceopen', () => {
                URL.revokeObjectURL(this.video.src);
                this.send({type: 'mse', value: this.codecs(MediaSource.isTypeSupported)});
            }, {once: true});
            this.video.src = URL.createObjectURL(ms);
            this.video.srcObject = null;
        }
        this.play();
        this.mseCodecs = '';

        this.onmessage['mse'] = msg => {
            if (msg.type !== 'mse') return;
            this.mseCodecs = msg.value;
            const sb = ms.addSourceBuffer(msg.value);
            sb.mode = 'segments'; // segments or sequence
            sb.addEventListener('updateend', () => {
                if (!sb.updating && bufLen > 0) {
                    try {
                        const data = buf.slice(0, bufLen);
                        sb.appendBuffer(data);
                        bufLen = 0;
                    } catch (e) {
                        // console.debug(e);
                    }
                }
                if (!sb.updating && sb.buffered && sb.buffered.length) {
                    const end = sb.buffered.end(sb.buffered.length - 1);
                    const start = end - 5;
                    const start0 = sb.buffered.start(0);
                    if (start > start0) {
                        sb.remove(start0, start);
                        ms.setLiveSeekableRange(start, end);
                    }
                    if (this.video.currentTime < start) {
                        this.video.currentTime = start;
                    }
                    const gap = end - this.video.currentTime;
                    this.video.playbackRate = gap > 0.1 ? gap : 0.1;
                    // console.debug('VideoRTC.buffered', gap, this.video.playbackRate, this.video.readyState);
                }
            });

            const buf = new Uint8Array(2 * 1024 * 1024);
            let bufLen = 0;
            this.ondata = data => {
                if (sb.updating || bufLen > 0) {
                    const b = new Uint8Array(data);
                    buf.set(b, bufLen);
                    bufLen += b.byteLength;
                    // console.debug('VideoRTC.buffer', b.byteLength, bufLen);
                } else {
                    try {
                        sb.appendBuffer(data);
                    } catch (e) {
                        // console.debug(e);
                    }
                }
            };
        };
    }
}
To address this, I extended the onmse method into a more resilient manual buffer controller that could survive the volatile throughput of a Starlink connection. By expanding the internal buffer, the system can now effectively “absorb” the massive bursts of data that occur when the satellite link is strongest, preventing video frame loss. Conversely, a consistent and controlled playbackRate ensures that during periods of low throughput the player doesn’t prematurely exhaust its reserves, maintaining steady playback rather than falling into a stuttering cycle. There were some UI and error-handling changes as well. These changes eliminated the 3-second death loop, proving that solving problems really does require doing the dirty work, looking into the details, and being pragmatic instead of settling for vibe-coded defaults. The keen-eyed among you may also wonder why MSE was used in the first place. Well, due to an incorrectly set option in an AI-generated config file, the WebRTC streams never started. You could tell it was AI-generated because these tools seem to assume you are learning, so they add unnecessary comments to every line, even in config files. I have always found that strange: config files tell programs how to behave, so why overly comment them for me?
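The spirit of the playbackRate half of the fix can be sketched as a small pure function. This is illustrative only: the constants, thresholds, and the name smoothedPlaybackRate are mine, not go2rtc’s, and the real change also had to manage the enlarged append buffer inside onmse.

```javascript
// Sketch of a gentler rate controller for an unstable (e.g. Starlink) link.
// Illustrative values, not go2rtc's: an enlarged buffer absorbs throughput
// bursts, and the target gap keeps a few seconds of media in reserve.
const BUFFER_BYTES = 8 * 1024 * 1024; // up from the snippet's 2 MiB
const TARGET_GAP = 3.0;               // seconds of buffered media to hold

// go2rtc's `gap > 0.1 ? gap : 0.1` rule chases the live edge aggressively,
// which stalls when throughput collapses. Instead, nudge the rate within a
// narrow band so the reserve is drained or refilled gradually.
function smoothedPlaybackRate(bufferedEnd, currentTime) {
    const gap = bufferedEnd - currentTime;
    if (gap > TARGET_GAP * 2) return 1.25; // large backlog: catch up slowly
    if (gap < TARGET_GAP / 2) return 0.9;  // reserve is thin: let it refill
    return 1.0;                            // steady playback in the sweet spot
}
```

The key design choice is that the rate never drops to a near-stall 0.1x or spikes to the raw gap value, so viewers see slightly delayed but continuous video instead of a buffer-stutter cycle.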
Addressing Some Pro-GenAI Arguments
“Oh, you need to get better at prompting. Give the LLM as much direction as you can.”
Let me think about this: if only there were a language that told the computer exactly what to do. Why are we trying to work around it?
“I use it for boilerplating and documentation”
Most editors have templating features, and the great thing is that they always generate the same output. As for documentation, I feel like a lot of technical documentation more or less explains the “what” behind things but not the “why” or the impact on the wider system. If I only needed the “what,” I’d just read the code. Generated documentation follows the same trend.
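As one example of deterministic templating, VS Code’s user snippets (a JSONC file, so the comment is allowed) expand a short prefix into the same boilerplate every single time. The prefix and body here are my own illustration:

```
// javascript.json — a VS Code user snippet (File > Preferences > Configure Snippets)
{
    "JSDoc function header": {
        "prefix": "doc",
        "body": [
            "/**",
            " * ${1:What this function does and why.}",
            " * @param {${2:type}} ${3:name} - ${4:description}",
            " * @returns {${5:type}} ${6:description}",
            " */"
        ],
        "description": "Identical output on every expansion, no model required"
    }
}
```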
“What about the study that concluded code written with AI assistance was found to be no harder or easier to change, and neither worse nor better in quality.”10
I do not see how this is a positive. Regardless, the data collection here was careless: half the participants had invalid time recordings, and the authors had to email them later to ask for best estimates of how long they took. To then claim this had no impact on the results is a massive stretch! Academia needs to do better. I think there is a reason why many open-source projects are leaning toward no-AI-generated-code policies.11 Good engineers are like skilled navigators who can efficiently chart a clear path from a starting point to a destination, while poor practices, which GenAI does not eliminate, leave others wandering aimlessly without direction.
The Bigger Picture
When has groundbreaking problem-solving ever come from regressing to the mean? It comes from outliers and chance. Even if you do train LLMs on “your own data,” you’re still going to suffer from catastrophic forgetting: when you fine-tune an LLM on your data, you aren’t actually “adding” new knowledge; you are overwriting existing knowledge. RAG doesn’t make this better. A RAG system cannot “leap” to a fundamental shift in logic; it really is just a biased summarizer in the vector space.
Reflecting on my time in high school, Computer Science wasn’t even something I wanted to fully study at university, mainly because it was my weakest science. We were thrown into the “abstractive hell” of Java, which really made me feel stupid; it took me a whole academic year to finally understand the difference between a class and an object… and let’s not even get started on interfaces. Struggling to grasp algorithmic thinking led to feelings of extreme self-doubt. What I really wanted was to become an aerospace engineer, but I was devastated when my application to TU Delft (the only choice I had in this field) was rejected due to a failed “personality test.” This led to a lot of emotional outbursts, but with my parents’ support, I finally decided my textbooks weren’t good enough and sought out better resources online. It was only by embracing failure and through hands-on exploration that I started to get better at programming. GenAI can remove the struggle that helps people learn, potentially reinforcing feelings of inadequacy. Especially with younger generations, we should promote embracing failure, rejecting shame, and fostering kindness and patience in learning environments. I may be completely wrong considering how quickly everything is moving, but this consumption and production without intent becomes a hollow loop, where nothing is truly understood, nothing is meaningfully created, and everything slowly loses its value.
1. Breaking Free: Pathways to a fair technological future – Norwegian Consumer Council
2. Amazon to cut jobs as it ramps up AI investment – BBC News
3. Amazon and artificial intelligence: how AI is reshaping the company – The Guardian
4. Amazon cloud outages caused by AI tools raise concerns – The Guardian
5. I’m #1 on google thanks to AI BULL – YouTube
6. Copyright News January 2026 – Copyright Alliance
7. AI Training on Copyrighted Data Is in Trouble – The Standup Podcast
8. AI learning said to use Reddit, Stack Overflow data – Yahoo Finance
9. 2026 Hardware Crisis and Private Cloud Adoption – Broadcom
10. Echoes of AI: Investigating the Downstream Effects of AI Assistants on Software Maintainability – arXiv
11. FFmpeg Enforces Human Review Standards, Pushes Back On AMD’s AI-Style Patch – Open Source For You
© 2026 Hariss Ali Gills. All rights reserved.