We’ve written quite a bit about CI/CD in general and Jenkins in particular, from top Jenkins plugins to comparing parallelization across CI systems. But sitting down (virtually) with Jenkins founder Kohsuke Kawaguchi (Launchable), also known as KK, who in his own words “accidentally started creating what eventually became Jenkins,” along with a panel of DevOps experts from CloudBees (Oleg Nenashev), Verifa (Ewelina Wilkosz), and ourselves, Incredibuild (Dori Exterman), for a genuine talk about the evolution of CI/CD? Let’s just say it’s a dream come true. As you might expect, we broke the discussion down into past, present, and future.
This piece provides selected insights from the webinar. To watch the full webinar, click here.
It was very interesting to get the perspective of our panelists on past events since all of them have been in this field for a while now, even before it was called DevOps, before there was CI/CD, or before there was agile. We wanted to get their take on some of the things we got right and what we should have done differently, as well as any other insight into where all this evolved from and where it started.
“Now we can harness a lot more computers around software development […],” said Kohsuke Kawaguchi (we couldn’t agree more!). “Back then you were really only using the computer in front of you, or just a laptop, or in my case, a Sun workstation, but as computing became more abundant, it made more sense to take advantage of that.”
Another issue that exists nowadays but was less prominent in the past, according to Kohsuke, is collaboration: “Software development became a little more collaborative in the past two or so decades. Back then it was much more individual work, and the collaboration required a common understanding of the state of the world. CI systems provided a common truth that people can agree on, like which test is passing and what change broke something.”
Oleg Nenashev described how the CI/CD space has evolved and emphasized Jenkins’s contribution to this evolution: “Every few years we just live in a completely new world because the environment changes, development tools change, programming languages change. So when Hudson was created there weren’t even virtual machines; then we had virtual machines, then we had containers, and now we have Kubernetes clusters, and for every [piece] of this environment you can build a new solution which would be totally optimized. For me, the biggest achievement of Jenkins is that it paved the way for a new generation of tools, because when you start looking, many of them actually learned from Jenkins.”
Ewelina Wilkosz shared her developer perspective on Jenkins and CI in the past: “I remember my first experience with continuous integration: it was something that happened behind closed doors. Maybe the doors were made of glass, because we could actually see whether Jenkins was red or green, but it was something very much detached from what we were doing. I had no idea what the tool was or who was maintaining it. I slowly learned because I became a member of a DevOps team, so I got a little bit closer, but there was this wall between developers and the people who were maintaining the CI system for them.”
One of the benefits of having CI is the fast feedback cycle for developers. Ewelina recognized that, but shed light on the downside: “So it was great because we, developers, were getting feedback, and we were getting this feedback fast, but it wasn’t always the feedback that we would ask for, because we were not involved in setting up, configuring, and using the tool; we were just looking at it. It took some time for some of the more proactive people to actually break down the wall and make their way to the room behind the glass doors. So it slowly started overlapping; developers were involved with some parts of whatever was going on with Jenkins. But I do remember that, for the most part, it was something that some smart people somewhere did, and they just delivered the results to us.”
Ewelina tied this perspective to what we should have done differently: “The thing that, I think, we could have done a little bit better from the start is to make this CI system part of the developer’s responsibility much earlier.”
Dori Exterman wrapped up the discussion of the past with a big thumbs up to Jenkins and its undeniable success: “I think the fact that Jenkins is so widely used today across so many industries, and that it has such a huge and vibrant ecosystem, is the best kind of proof that the concept of the platform is valid, and has been for many years, so this is the only proof we need.”
However, Dori did point out what could have been different: “We could have had a better concept of visibility and quality entwined into the Jenkins platform itself, something that connects and drives insights […] something in the platform that connects and drives insight from all the various, and many times fragmented and siloed, tools that are used within the CI/CD process. This is something I feel we could have established right at the beginning […] I feel that we are still missing a more holistic approach to this huge ecosystem that exists within Jenkins. For example, if a change in a specific module frequently makes a specific set of tests fail, then I would like to know about it as a dev manager, so I can either shift left this test to be executed by the developers prior to committing their code to the CI/CD cycles, or add better documentation for this piece of code that will prevent developers from creating the types of changes that fail continuous testing, or that can later create regression bugs in production. Looking at Jenkins today with its 1500 plugins, I believe that the mission of connecting them all, as part of a holistic approach and visibility, is a mission that is still waiting for us.”
One can really get lost in all of today’s options and methodologies. Back then, things were simple – there were a handful of tools, with Jenkins leading the pack, but today there are so many. We asked our panelists: “Where do you even get started in terms of implementing the latest and greatest versus optimizing legacy tools and approaches?”
Oleg took a shot at answering this ambitious question, recognizing that there are now different generations of tools, but that Jenkins is still a necessity: “Currently I can see there are three main generations: classic tools, including Jenkins, TeamCity, and many others; then a second generation which is rather more opinionated, focused on continuous delivery, like GitLab CI, GitHub Actions, etc.; and there is also a third generation emerging now which is rather focused on the UNIX way, so not a single tool that does everything, but small tools that each do a particular job well and integrate well with others.”
He continued: “So, from what I see, these are the three types of tools on the market at the moment. All of these tools make total sense, all of these tools should keep evolving, and for me, depending on the use case, you can take one of them. So, if you need a highly customized system, you would rather take Jenkins, or maybe something from the third generation where you glue everything together.”
Oleg continued, explaining why Jenkins is so irreplaceable: “But if you glue things together, you still need an orchestrator, and hence you might use Jenkins for that. So, for me, now, if I started a new project like you asked, I would either take Jenkins as the automation framework or start building something fully cloud-native with all these new technologies stacked on. But even if I did that, I would still prefer to have the Jenkins user experience for myself, because from a developer experience standpoint it doesn’t matter whether your system is cloud-native or not; I want to have great reporting, great insights, and actually Jenkins, historically, has been a really great fit for that. It has some of the best unit test and coverage reporting among CI/CD systems, and this is what we can still leverage as users even if we live in the cloud-native world.”
Ewelina followed Oleg in discussing the present, describing how overwhelmed she feels by the wide selection of tools available: “The landscape is so big and constantly evolving. I can’t keep up. I’m the one who’s supposed to specialize in it, and I’m sure I don’t know most of these tools, so this is a little bit scary […].”
She approaches CI by focusing on developers’ experience and preferences: “The main driver is the user experience and user satisfaction, because at the end of the day it’s the developers that have to be happy with the way they work and create their software.”
In addition, Ewelina recognizes the cloud as a key player in today’s developer experience: “The great thing about the present is something that five years ago, for me, would have been pretty difficult to imagine: that the cloud is becoming a standard in those big enterprise companies. I thought they would stick to on-prem because of security, but the cloud is there, and most developers can create their own virtual machine just like that, and try different tools. So they don’t have to guess; they can try things, they can experiment, they can figure out what works best for them, and depending on what they are doing, there are things that work out of the box, and it makes no sense for them to have a full-blown instance of something that they have to maintain on their own […].”
Dori turned the spotlight to productivity: “As an enthusiastic evangelist for development productivity, I believe that productivity and agility are where I put a huge focus, at least if you ask me about a position where I’m already in a company that has a CI/CD cycle. I believe that shortening these dev cycles is a main pillar in an endless process of ongoing improvement, and starting there makes a lot of sense. This approach is also backed up by a relatively new role that is gaining a lot of momentum in the dev space, called a productivity engineer, whose main purpose is to keep improving the dev workflow and the productivity and performance of the dev cycle.”
Relying on his experience at Incredibuild, Dori carried on: “Something that I have seen very often during my decade at Incredibuild is that once the dev cycle time is reduced, once you start with that, you’ll see that this cycle will run much more frequently than before. For example, if you take gtest executions that instead of running for 11 hours are now running for 11 minutes (and in this case, I’m referring to a real use case), then this transformation directly leads to a cycle that is now being executed much more frequently. And, of course, besides the obvious time and productivity gain of reducing 11 hours to 11 minutes, the mere fact that you can run a cycle 20 times a day instead of just two times a day gives you 18 more cycles a day to learn from and to improve […] you can add additional quality steps, add additional tools, learn more frequently, fail more frequently and fast, and experiment more, because if experimentation takes half a day just for tune-ups, I believe you’ll do it less frequently, and then you’ll be able to improve less. So I am a big evangelist for productivity and performance, and when you have a very good, solid base, then you can start improving your entire ecosystem.”
Kohsuke shed a different light on the speed aspect, stressing the importance of measurement: “I think the measurement and getting more insight out of the system […]. I try to describe this in terms that non-technical people will understand, in a measure that can show progress, and that way, I think, the effort can gather more support […].”
Dori added to Kohsuke’s note: “I totally agree with that, and I believe that while we’re on measurements, especially when you want to prove your ROI, one of the things that a lot of our customers, or DevOps managers, are missing is that they only try to quantify the tangible measurements […] the time saved, the productivity gain. But there are other kinds of return on investment of their work which are less quantifiable: the ability to be in production in time, to be predictable with production, to have fewer regressions […]. I think that non-tangible ROI is also an art that dev managers need to learn how to surface to their managers.”
From a future perspective, it’s amazing to think that twenty years ago we were pretty much living in the stone age, and twenty years from now, we’re going to be looking back at today and laughing at how ancient things seem. But until then, all that is left is to speculate.
We asked our panelists whether they have any trends, predictions, or hopes for the future, and how best to future-proof and prepare for the unknown.
Ewelina brought up security as a dominant trend that will probably be with us for a while: “Since companies stopped clinging to their own on-prem infrastructure and moved to the cloud, suddenly security, which was important before, has become critical. We never heard so much talk about security-related issues three, four, or five years ago. And there are the whole compliance requirements, and there are always a lot of security requirements around that. We put things somewhere in the cloud, and then someone can find a back door and get access to the infrastructure within the company. This is really dangerous, and I do see a lot of focus and people educating themselves in that area.”
When looking at the future, Dori differentiated between trends and megatrends: “You have these megatrends that drive trends that are more specific within this niche. So, for example, in our domain, I believe that one of the more interesting trends will be related to quality, to agility, and to fast and predictable time to market. Today in most dev organizations and dev groups, and especially in large enterprise-grade software, there is a huge technical debt in test automation and test coverage, which prevents a full embrace of continuous delivery practices. All this, I believe, is part of the megatrend of agility and time to market that pushes the software community to improve its methodologies and productivity. I believe that it’s going to be a huge challenge for dev leaders because, in order to move toward some form of continuous delivery, dev managers must eliminate the human factor from the equation and rely on end-to-end automation processes that they feel confident about and can also continuously improve, and I already see that this huge testing technical debt is driving exciting technologies.”
Dori also discussed AI, which we can all recognize as a well-known trend (we also mention it in our DevOps trends piece): “One of the things that I’m looking at and am very enthusiastic about is AI for writing unit tests, or AI that helps developers write their tests faster. For example, check out Diffblue, which is a very interesting and exciting startup in this domain, and I believe we’ll see more of these technologies emerging to address this huge market of test automation technical debt. And of course, once AI writes these tests for us instead of us developers writing them, there will be hundreds of thousands of them in large projects, and having a huge number of tests will, of course, make them slow. This circles back to my first point about productivity solutions, such as Incredibuild’s distribution and caching, or running only the tests that matter, which is what Kohsuke’s startup Launchable is targeting. So I feel that the next couple of years are going to be very interesting in the area of CI/CD, and I’m very eager to see which trends are going to really flourish in that time.”
To hear additional future trends and predictions, as well as answers to relevant questions, watch the webinar.