Building Services Versus Buying Them: It’s Not a Zero-Sum Game


By Patrick McFadin, DataStax

When the gap between enterprise software development and IT operations was bridged 15 or so years ago, building enterprise apps underwent a radical change. DevOps swept away slow, manual processes and embraced the idea of infrastructure as code, a shift that made it far easier to scale quickly and deliver reliable applications and services into production.

Building services internally has been the status quo for a long time, but in a cloud-native world, the lines between cloud and on-prem have blurred. Third-party, cloud-based services, built on powerful open source software, are making it easier for developers to move faster. Developers' mandate is to build with innovation and speed to compete in hyper-fast markets. For all application stakeholders—from the CIO to development teams—the path to simplicity, speed, and risk reduction often involves cloud-based services that make data scalable and available instantly.

These two points of view (build it in-house or buy it as a service) aren't far apart, and they coexist at many established organizations that we work with. Yet they can be at odds with one another. In fact, we've often seen them work against each other in ways that slow down application development.

There might be compelling reasons for taking everything in-house, but end users are voting with what they actually ship. Here, we'll look at each group's point of view and try to understand its motivations. It's not a zero-sum game, and the real answer might be the right combination of the two.

Building services

Infrastructure engineers build the machine. They are the ones who stay up late, tend to the ailing infrastructure, and keep the lights on in the company. Adam Jacob (the co-founder and former CTO of Chef Software) famously said, “It’s the job of ops people to keep the shit-tastic code of developers out of your beautiful production infrastructure.” If you want to bring your project or product into the sacred grounds of what they’ve built, it has to be worthy. Infrastructure engineers will evaluate, test, and bestow their blessing only after they are convinced of it themselves.

Tenets of the infrastructure engineer include the following:

  • Every deployment is different and requires qualified infrastructure engineers to ensure success.
  • Applications are built on requirements, and infrastructure engineers deliver the right product to fit the criteria.
  • The most cost-effective way to use the cloud is to do it ourselves.

What infrastructure engineers care about

Documentation and training

Having a clear understanding of every aspect of infrastructure is key to making it work well, so thorough and clear documentation is a must. It also has to be up to date; as new versions of products are released, documentation should bring everyone up to speed on what’s changed.

Version numbers

Products need to be tested and validated before going into production, so infrastructure teams track which versions are blessed for production; updates must be tested too. A critical part of that testing is security: we generally stay a step behind the cutting edge so that we get the most stability and security.

Performance

Performance is critical, too. Our teams have to understand how the system works in various environments to plan adequate capacity. Systems with highly variable performance characteristics, or those that don't meet minimum requirements, will never get deployed. New products must prove themselves in a trial by combat before even being considered.

Using services

Installing and running infrastructure is friction when building applications. Nothing is more important than the speed of putting an application into production. Operational teams love the nuances of how things work and take pride in running a well-oiled machine, but developers don't have months to wait for that to happen. Winning against competitors means renting what's needed, when it's needed. Give us an API and a key, and let us run.

When it comes to infrastructure, developer tenets include:

  • Infrastructure has to conform to the app and not the other way around.
  • Don’t invent new infrastructure—just combine what’s available.
  • Consume compute, network, and storage like any other utility.

What service consumers care about

Does it fit what I need, and can I verify that quickly?

The app is the center of the developer’s universe, and what it needs is the requirement. If the service being considered meets the criteria, this needs to be verified quickly. If a lot of time is spent bending and twisting an app to make a service work, developers will just look for a different service that works better.

Cost

Developers want the lowest cost for what they get, and nothing so complicated that a spreadsheet is required to understand the pricing. With services, developers don’t necessarily believe in “you get what you pay for,” with more expensive being better. Instead, they expect the cost to decrease over time as the service provider finds efficiencies.

Availability

Developers expect a service to always work, and when it doesn’t, they get annoyed (like when the electricity goes out). Even if there is an SLA, most probably won’t read it—they’ll simply expect 100% uptime. When developers build an app, they assume there will be no downtime.

In the end, the app matters most

From working with a lot of organizations for whom applications are mission-critical, we’ve often seen that these two groups don’t work particularly well together—at times, their respective approaches can even be counterproductive. This friction can slow application production significantly, and even hamper an organization’s journey to the cloud.

This friction can manifest itself in several ways. For instance, a reliance on home-grown infrastructure can limit the ways that developers access the data required to build applications. This can limit innovation and introduce complexity to the development process.

And sometimes balancing cloud services with purpose-built solutions can create complexity and increase costs, watering down the savings expected from moving to the cloud.

Application development and delivery is cost sensitive, and it also demands speed and efficiency. Anything that gets in the way can dull a competitive edge and even cost revenue.

Yet we also know of organizations that have intelligently combined the efforts of the infrastructure engineers who run their mission-critical apps today and the people who use services to build them. When the perspective and skills of each group are put to good use, flexibility, cost-efficiency, and speed can result.

Many successful organizations today are implementing a hybrid of the two (for now): some bespoke infrastructure mixed with services rented from a provider. Several organizations are leveraging Kubernetes in this quest for the grand unified theory of infrastructure. In a single deployment description, some blocks create pods and service endpoints on infrastructure the organization runs itself, while other blocks describe endpoints consumed on a pay-per-use basis. If you are running Kubernetes in any cloud, think of the storage and network services you attach to it.
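As a rough sketch of what that mix can look like in practice, the Python below uses the official Kubernetes client to apply three blocks to one cluster: a Deployment and a Service the team builds and runs itself, plus a PersistentVolumeClaim that hands storage off to the cloud provider's pay-per-use disks. The application name, container image, and the "standard-rwo" storage class are illustrative assumptions, not references to any particular product.

```python
# A minimal sketch of the hybrid model described above, using the official
# Kubernetes Python client. The app name, image, and storage class are
# placeholders for illustration only.
from kubernetes import client, config, utils

# Built: a workload the team packages and operates itself.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders-api"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "orders-api"}},
        "template": {
            "metadata": {"labels": {"app": "orders-api"}},
            "spec": {
                "containers": [
                    {"name": "orders-api", "image": "example.com/orders-api:1.0"}
                ]
            },
        },
    },
}

# Built: a service endpoint in front of those pods.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "orders-api"},
    "spec": {
        "selector": {"app": "orders-api"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

# Bought: the claim is satisfied by whatever pay-per-use block storage the
# cloud provider exposes through the named storage class.
volume_claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-data"},
    "spec": {
        "storageClassName": "standard-rwo",  # assumed cloud storage class
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

if __name__ == "__main__":
    config.load_kube_config()  # authenticate with your local kubeconfig
    api = client.ApiClient()
    for manifest in (deployment, service, volume_claim):
        utils.create_from_dict(api, manifest)
```

The point isn't the specific objects; it's that the same deployment description can hold blocks you operate and blocks you rent, side by side.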

There are other important elements in an organization's universe of services, whether they're built or bought. Standard APIs are the de facto method of serving data to applications, and they reduce time to market by simplifying development. SLAs, both customer-facing and internal, clearly delineate scale and other performance expectations so that developers don't have to work them out on their own.
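From the developer's side, this is the whole appeal of "an API and a key." The sketch below assumes a generic REST data service; the endpoint, path, and token are placeholders rather than any real provider's API.

```python
# Hypothetical sketch: consuming a hosted data service with nothing more than
# an endpoint URL and an access token. None of these names refer to a real API.
import os

import requests

base_url = os.environ["DATA_API_URL"]   # e.g. https://data.example.com/v1
token = os.environ["DATA_API_TOKEN"]    # the key the provider issues

# Write one record to an assumed "orders" collection exposed by the service.
response = requests.post(
    f"{base_url}/orders",
    headers={"Authorization": f"Bearer {token}"},
    json={"customer_id": "42", "status": "new"},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```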

Finally, I should point out that this is an immediate challenge in the world of open source data where I live. I work with Apache Cassandra®—software you can download and deploy in your own datacenter for free; free as in beer and free as in freedom. I also work on the K8ssandra project, which helps builders provide Cassandra as a service for their customers using Kubernetes. And DataStax, the company I work for, offers Astra DB, a Cassandra-based service that requires no operations work from developers. I understand the various points of view—and I'm glad there's a choice.

Learn more about DataStax here.

About Patrick McFadin:


Patrick is the co-author of the O’Reilly book “Managing Cloud Native Data on Kubernetes.” He works at DataStax in developer relations and as a contributor to the Apache Cassandra project. Previously, he worked as an engineering and architecture lead for various internet companies.


