Machine learning and "artificial intelligence" are on a lot of people's minds right now, largely because of ChatGPT. With its widely available public demonstration, the not-so-aptly named OpenAI group (ChatGPT is not open source) has shown the public that when you point a whole cloud of computing power back at the internet, you can generate believable text about nearly any subject. As many people have pointed out, there's a big difference between "believable" and "correct," of course, but on a superficial level ChatGPT seems like a valid source of surface-level summaries. ChatGPT isn't open source, but nearly everything it outputs is based on open knowledge. It's based on content you and I have put onto the internet for others. Does this mean ChatGPT has joined a community? Is ChatGPT contributing to improving shared knowledge? Or does it just reduce how many internet searches you have to do before arriving at a general idea of what might be an answer to your question?
Benefits of the contrary
You're probably a member of some community, whether it's an open source project or just your local neighborhood. Either way, you've probably noticed that people can sometimes be annoying. It's a fact of life that people have opinions, and those opinions often conflict with one another. When there's a disagreement over how something ought to be done, it usually feels like time is being wasted. After all, you know the best solution, but instead of putting it into action, you have to spend all day convincing everyone else of its merit. It would be so much easier if everyone would just agree with you, right?
Disagreement is also uncomfortable. It leads to difficult conversations. You have to find a compromise or else convince somebody to see things your way, even as they try to convince you to see things their way. It's not easy, and it's often not what you want to be doing at any given time.
Of course, most adults understand that there's power in the contrary. A bot might be able to emulate a contrary opinion, but there's a difference between an opinion and mere stubbornness. Differing opinions, formed from expertise and experience, are vital for successful and fruitful collaboration. As uncomfortable as they may be, differing opinions on the "right" way to do something are the best way to stress-test your ideas. By looking at the contrary, you can identify your preconceptions, biases, and assumptions. By accepting differing opinions, you can refine your own.
Spark of originality
A bot armed with machine learning can only derive ideas from existing ideas. While there may be value in distilling noise into something singularly tangible, the result is still just a summary of notions that have come before. A gathering of actual human minds is powerful because of the seemingly irrelevant and unexpected ideas that form from conversation, iteration, agreement, disagreement, and diversity of experience and background. It might not make logical sense for me to base my CI/CD pipeline on the strategy I invented for last night's tabletop roleplaying game, but if that inspiration produces something that ends up being really good, then its origin doesn't matter. There's an irrationality to interpreting the world through your experience embroidering or gardening or cooking or building LEGO sets with your kid, but that doesn't make it invalid. In fact, it's the ability to connect inspiration to action that gives birth to invention. That's not something ChatGPT can learn from the internet.
System design
ChatGPT and other AI experiments may well have their uses: reducing repetitious tasks, catching potential bugs, or getting you started with a particularly confounding YAML file. But maybe the hidden message here is actually a question: why do we think we need ChatGPT for these things? Could it be that these processes need improvement themselves? Could it be that writing some "simple" YAML isn't as simple as it first seemed? Maybe the bugs that require an artificial intelligence to catch are less a disease than a symptom of over-complex language design or a failure in how we teach code, or simply an opportunity to develop easier entry points into programming.
In other words, maybe machine learning bots aren't the solution to anything, but an indication of where we're doing a disservice to ourselves. In open source, we design the systems we interact with. We don't have to design chat bots to help us understand how the code works or how to program, because we're the inventors. We can redesign around the problems. We don't need a chat bot to coalesce and condense the confusion of the worldwide community, because we can create the best solution possible.
Human connection
Community is about people. Making connections with other people who share your interest and passion for something is what makes communities so fulfilling. Both the disagreements and the moments of shared inspiration are profound experiences that we humans bring to one another in our forums, chat rooms, bug reports, conferences, and neighborhoods. As an open source community, we create technology. We create it openly, together, and with a genuine interest in sharing experiential knowledge. We value diversity, and we find value in the perspectives of novices and experts alike. These are things you can't distill into a machine learning chat bot, whether it's open source or not (and ChatGPT is not).
The open source community thrives on the genuine interest in sharing. That's something ChatGPT cannot emulate.