
The downside to writing everything yourself is that your responsibility goes W-A-Y up. For example, I was writing an HTTP/HTTPS client for every non-toy OS, and I didn't want to work at the socket level or write my own crypto implementation. Did I lose some fine-grained control, like keeping all allocations within my own allocator or arena? Sure.
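To make that trade concrete, here's roughly what the "lean on a battle-tested stack" version looks like in C. This is a minimal sketch, not my actual client: libcurl is a stand-in for whichever platform HTTP/TLS stack you'd use, and the URL is a placeholder. The point is that the sockets, the HTTP state machine, and the crypto all live inside the library.

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *curl = curl_easy_init();
    if (curl) {
        /* libcurl uses a TLS backend the system provides; we never
           touch a raw socket or a cipher suite ourselves. */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/"); /* placeholder URL */
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
    }

    curl_global_cleanup();
    return 0;
}
```

A handful of lines, and every security fix to the TLS layer ships to me for free.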

But the advantage of using systems that are well tested, constantly improved (especially on the security side), and dynamically linkable (so I automatically pick up their latest fixes without rebuilding my software, assuming interface compatibility; see the sketch below) far outweighs those gains, IMHO. There was a good article a while back comparing 'forests of dependencies' with writing everything yourself (it used VLC as an example). Writing my own XML parser or de/compression routine in the middle of a project that isn't about parsing or compression seems like a bad idea.
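And the "no rebuild" point isn't hand-waving. Here's a minimal sketch of why, using the POSIX dlopen family to make the late binding explicit; the library name libexample.so and the entry point do_work are made up for illustration. The program binds to a name and a symbol, not to a particular build, so dropping a patched shared object on disk upgrades every consumer as long as the exported interface stays compatible.

```c
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Hypothetical library and symbol names, for illustration only. */
    void *lib = dlopen("libexample.so", RTLD_NOW);
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* The contract is the symbol's name and signature, not the
       library's build: swap in a patched libexample.so and this
       program picks it up on its next run, unmodified. */
    int (*do_work)(void) = (int (*)(void))dlsym(lib, "do_work");
    if (do_work)
        printf("do_work returned %d\n", do_work());

    dlclose(lib);
    return 0;
}
```

The same holds for ordinary load-time linking (-lexample at build time); dlopen just makes the mechanism visible.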

Now you do make a number of good points… I especially like your characterisation of eggheads in academia and slaves in corporations. OSs in particular seem incredibly slow, and getting slower with each new version. Unnecessary abstraction is causing dramatic slowdowns. Projects that should be done with 3 people are being done with dozens. Quality incentives are wrong.

PS: On the subject of measuring software quality, what's your perspective? It can't be all aesthetics. High cohesion and low coupling? Connascence? Test and doc coverage? Interoperability? Final package size? Ability to work both offline and networked?
