Gordon Haff is a Red Hat technology evangelist and a frequent, highly acclaimed speaker at customer and industry events. His focus areas include Red Hat Research, open source adoption, and emerging technologies broadly. He is the author of How Open Source Ate Software from Apress and co-author of From Pots and Vats to Programs and Apps: How Software Learned to Package Itself, in addition to numerous other publications. Prior to Red Hat, Gordon wrote hundreds of research notes, was frequently quoted in publications such as The New York Times on a wide range of IT topics, and advised clients on product and marketing strategies. Earlier in his career, he was responsible for bringing a wide range of computer systems, from minicomputers to large UNIX servers, to market while at Data General. Gordon has engineering degrees from MIT and Dartmouth and an MBA from Cornell's Johnson School.
Gordon Haff (He/Him/His)
Authored Comments
I don't disagree with any of that. In fact, I don't really talk much about the speed of failure in my post. I would argue, though, that speed of iteration is part of the equation. It's fine to understand why some lengthy, expensive project failed after it was put into production, but you may not have a "next time" to apply any lessons.
It's really a mix today, and it's a matter of what type of workloads you deal with. Among traditional enterprise workloads, there's a fair bit of variation in kernel versions, configurations/tuning, and operating systems. For newer styles of workloads (microservices, etc.), not so much. That's why I expect a mix of hardware virtualization and containers for the foreseeable future.