Did you know you can install HEASOFT's XSPEC model library for #xray #astronomy in seconds with #python pip?
No? Well that's because you couldn't. But now you can:
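Since the post doesn't include the link here, a minimal sketch of what the install would presumably look like follows. The package name `xspec-models-cxc` and the import name are assumptions on my part, not taken from the post; check PyPI for the actual distribution name.

```shell
# Hedged sketch: the package and module names below are hypothetical --
# substitute the ones from the linked announcement / PyPI.
python -m venv xspec-env              # isolate the install in a virtualenv
source xspec-env/bin/activate
pip install xspec-models-cxc          # hypothetical package name
python -c "import xspec_models_cxc"   # smoke-test the import (name assumed)
```

The point of the announcement is that this replaces a full HEASOFT source build, which traditionally takes far longer than "seconds."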
@mro Hi, thanks for the sharp analogy! The X-ray/asbestos comparison is a classic way to view the risks of new tech.
However, my argument for “socialization” stems from the belief that LLMs are a significant productive force. If we view them as “asbestos,” the logical step is a total ban. But if we see them as a “utility” (like electricity), the current corporate monopoly is the real poison.
In a historical materialism framework, the “toxic” side effects we see today—like reckless resource consumption or data exploitation—are often driven by the capitalist mode of production (profit-first scaling). By “liberating” or socializing the material basis of AI, we gain the democratic power to regulate its use and minimize those downsides, turning it into a true public good rather than a corporate hazard.
Hi @hongminhee,
#asbestos wasn't banned from the beginning, nor was #Xray. Time told. So it may be with #LLMs.
As to how productive they are - the data basis is so far too narrow to tell IMO. Some say yes, some say no. Recently a study claimed devs feel +20% but are in fact -20%.
I have the notion the L in LLM fits the B in Big IT quite well.
We have to re-focus from the means to the ends: what goals do we accomplish, not how much software we engage.
I've been increasingly concerned about the corporate monopoly over frontier LLMs. While many ethically-minded people choose to boycott these models, I believe passive resistance alone cannot break the structural grip of big tech. To truly “liberate” these technologies and turn them into public goods, we need to look beyond moral high grounds and engage with the material basis of AI—specifically compute, data, and the relations of production.
I've written two posts exploring this through the lens of historical materialism. The first piece analyzes why current “open source” definitions struggle with LLMs, and the second discusses what it means to “act materialistically” in our imperfect world. My goal is to suggest a path forward that moves from mere boycotting to a more proactive, structural socialization of AI infrastructure.
If you've been feeling uneasy about the AI landscape but aren't sure if boycotting is the final answer, I'd love for you to give these a read:
#LLM #AI #opensource #historicalmaterialism #histomat #materialism #digitalcommons
Hi @hongminhee,
maybe #LLMs are the X-ray of IT.
In the early days it was used like candy. (Kids got their feet x-rayed in stores on open appliances, so parents could see if the shoes fit. No kidding.)
As experience grew, its use was increasingly regulated and cut down. But it's still used to this day. For narrow use cases. Applied carefully.
Admittedly I doubt LLMs are as useful as #Xray. I think they're rather the #asbestos (which made wonderful things possible in concrete but mostly wasn't worth the downsides).