The real trap of generative models: Cloud dependence
Date written: 2025/05/13

This is a direct follow-up to yesterday's article. In it, I want to discuss a looming and far more devastating long-term phenomenon, one with ever-increasing chances of turning all of us into addicts and slaves to big tech: the direct link between the growing use of generative, large language, and other machine learning models, and the dependence on the associated "cloud" services that these same big tech firms "generously" provide to us all... with a price.

What is that price? At first it looks like the usual trappings we first got reeled into during the "Big Data" age of the 2010s: we can use these services (partially or fully) at no upfront cost, with the catch that the data we send is fed into algorithms and potentially sold to unknown third parties. I have no doubt in my mind that this is still being practiced, with companies banking on the bet that we appreciate the conveniences so much that we'd be willing to toss any privacy and similar concerns to the wayside. I can't say the bet isn't paying off, sadly...

This is not where it ends, though. Much like how the oh-so-familiar big tech collective, for whom I really had to grit my teeth and not use a particular six-letter acronym this time, managed to utterly engulf the enterprise sphere with "all-encompassing" solutions like AWS, Azure, Workspace, and the rest, basically killing off the concept of on-premises deployments and raising costs to unsustainable levels, both the new and old blood firms are now applying the same model across the board, with machine learning models as the carrot on the stick. It's almost like drugs or one's first visit to a betting shop: you're given a taste for pennies or no cost at all, you get hooked, and then they keep charging you for your addiction on the regular.

I am not underestimating the addictive properties whatsoever. All of us have seen at least one person who can't seem to do almost anything without prompting some kind of ML model, whether it's for code, scripting, illustration, or even creative writing, and the number of people affected just keeps going up. Hell, I'd be lying if I said I hadn't fallen into this vice at least a couple of times myself, namely for anything that required writing sophisticated low-level assembly or, god forbid, 3D graphics (and all the math that comes with it).

Just like with ciggies, dope, alcohol, and excessive gaming, once you taste the drug it's very hard to toss it away, try as we all might to tone it down. Big tech knows this, and for years they've been perfecting formulas for getting us all hooked on something. First it was social networking services and their "engagement" metrics; now it's overuse of large language and generative models, so that we eventually cough up the dough.

"But what about local machine learning models? Nobody says we have to rely on these services!", you might say. Your point would be fair if the reality was that most people do not want to bother for starters, and even if they did they would eventually hit a brick wall thanks to the fact that the average person simply cannot afford the extreme amount of processing, memory, and especially storage that would be required to get an equivalent service at home, finanically or physically. Don't get me wrong, I do think that the ability to reasonably self-host at least some types of machine learning models is nice (even if the outputs aren't exactly great, like with RVC), but it is simply not a realistic scenario, even in the most ideal conditions equipment-wise.

This is all to say that the endgame is for each and every one of us to end up leading a very difficult, if not impossible, life of ML abstinence, because these models would become an unavoidable part of everyday life that we'd have no choice but to pay for on the regular. It's not like they haven't already succeeded at that with social networking services, as well as Internet access in general. Ultimately, while all these squabbles over the "ethical" use of generative and large language models are not something to be ignored, they're minuscule and, in a way, a smokescreen for the much more ominous reality that awaits us if these trends continue.

What can we all do about it? I'm afraid to say that we cannot have any significant impact. Public resistance didn't save Web 1.0 from becoming a "relic of the past", nor will it do so here. Not enough people care, simple as that.

Have a nice day...