Over the last four years, the world of systems administration as we know it has changed significantly. Not that it hadn't already started changing in this direction around 2008-2009, but it seems that the majority of companies are now pursuing systems in a different way than ever before.

In days gone by, a Systems Engineer or System Admin for Unix/Linux systems would generally automate the build and configuration of systems in much the same fashion – and that was considered the automated way.

For a time, kickstart was the standard, and it still is in some cases. The process usually involved PXE boot, DHCP, a kickstart server, a kickstart image, and kickstart scripts, followed by some other automation script that completed the system configuration.
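
As a rough illustration, a minimal kickstart file from that era might have looked something like this. This is a sketch, not anyone's production config: the mirror URL, password hash, and bootstrap script are all placeholders.

```
# Minimal RHEL/CentOS 7 kickstart sketch (illustrative values only)
install
url --url=http://mirror.example.com/centos/7/os/x86_64/
lang en_US.UTF-8
keyboard us
rootpw --iscrypted $6$examplehash
timezone America/Chicago
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
openssh-server
%end

%post
# Hand off to whatever automation script completes the system configuration
curl -o /root/bootstrap.sh http://mirror.example.com/bootstrap.sh
sh /root/bootstrap.sh
%end
```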

After that, a common tactic was to build images using Packer, publish the image to a standard location, then boot that image and attach it to a configuration management system such as Puppet or Chef. For datacenter servers, this is often still the method used.
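
A stripped-down Packer template for that workflow might look roughly like this. The AMI ID, region, and Puppet server are placeholder assumptions, and the shell provisioner that enrolls the box in Puppet is just one of several ways Packer can hand off to configuration management.

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "centos",
      "ami_name": "base-image-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo yum install -y puppet",
        "sudo puppet agent --onetime --no-daemonize --server puppet.example.com"
      ]
    }
  ]
}
```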

But each year, developers want infrastructure faster, and in reality, all they want is to deploy their code and not worry about the infrastructure. Gone are the days when developers would put in a request for a server and wait weeks, days, or even minutes for it to be built. They want to deploy their code into a managed environment. The landscape in this space is varied.

Once upon a time, LSF Grid or similar compute grid systems were used, built and managed internally within the company. Other PaaS systems were developed and adopted, such as PCF, but most recently the rise of Kubernetes has been pretty impressive.

I went to KubeCon in Seattle this past month, and it was pretty amazing to hear about the growth of the conference (attendees year over year) and the number of vendors creating solutions for managing K8s clusters. Even with a managed platform, though, there are headaches that come with deploying apps: there is no one way to do it, there are many. But this is typical of open source solutions and of Linux culture in general. Why build one way with one vendor when you can have 10,000 different people working in groups of tens and hundreds of engineers to build things quickly?
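
To make "many ways" concrete: even the most basic route, applying a raw Deployment manifest with kubectl, is only one option alongside Helm charts, operators, and CI/CD pipelines. A minimal sketch, with placeholder names and a hypothetical registry:

```yaml
# Minimal Kubernetes Deployment sketch (names and image are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, and that's before you've chosen how to template it, version it, or wire it into a pipeline.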

Every company is adopting the concepts of infrastructure as code, agile development practices (for good or bad), and easy deployment of applications. What this means is that apps get built and deployed quickly, and it is not uncommon for large enterprise companies to have thousands of apps running, doing similar or conflicting things all over the company.
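
In practice, infrastructure as code means the servers and networks themselves are declared in versioned text files. As one hedged example, a tool like Terraform (my example here, not something named above) reduces a server to a few reviewable lines:

```hcl
# Hypothetical Terraform sketch: one declarative, version-controlled server
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t2.micro"

  tags = {
    Name = "app-server"
  }
}
```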

The good part is that it can be done fast, and great ideas come along. If you have good communication strategies that get engineers talking to each other, sharing information and lessons learned, and collaborating openly, without keeping silos in effect or building them higher, there is something to be gained by having a lot of different engineers coding similar things.

The challenge is alignment; it has always been establishing standards that prevent duplication of systems.

In this fast-moving environment, internal and cloud systems grow quickly, which creates more points of exposure for audit compliance and security, and risks the spread of a beast you cannot control or manage.

In some ways, it is like releasing a spring that has been coiled too tightly for too long: it flies off quickly in an undetermined direction. In other ways, it is like a bird gaining its wings, flying off with a wonderful newfound freedom.

Which way do you see this going where you work?