The community needs the technology: you can’t have a community in which software-sharing is your way of life unless you’ve got free software to do everything. The point is, the software had the purpose of making the community possible. But part of the idea was that I wanted everyone to be part of this community; the aim was to liberate all computer users.

At the same time, I was getting one lesson after another in the injustice of non-free software. MIT had bought a new machine which ran Digital’s time-sharing system, TWENEX, instead of the one we’d been developing; it had security features that allowed a group of users to seize power over the machine and deny it to others.

I saw the repressive rules for student computers introduced at Harvard. I suggested they apportion a computer to each group of students living together and let them all run it; those who were interested would develop the skill of system administration, and they would all learn to live together as a community by resolving their own disputes. Instead I was told: ‘We’ve signed contracts for proprietary software, which say we’re not allowed to let any of the students get at it.’ This was a proprietary operating system that made it possible to have programs that people could run but couldn’t actually read. It taught me that non-free software was a factor in setting up a police state inside the computer, which at the AI Lab was received wisdom; I wasn’t the first to call that ‘fascism’.

I was also the victim of a non-disclosure agreement, which taught me about the nature of those agreements: that they are a betrayal of the whole world.
I know I’m beating a dead horse, but honest to God, this would be so much better if it had any integration at all with (political/social) theory.