
Infrastructure as Code:
What Is It and Why Should You Care?

Spencer Hess - September 1, 2020

Necessity is the mother of invention, sure, but you can just as easily argue that invention is the mother of necessity, particularly in an era driven by information technology. Consider the Internet, for example. Its invention spawned previously unimaginable ideas that quickly grew to necessities. At this late stage in the Information Age, Internet connectivity and ubiquitous access via the web are baseline necessities. 

For your business to stay relevant and competitive today, you need not only a web presence but also tools that harness the power of the Internet to simplify the creation, management and distribution of the resources that host the software solution suited to your industry. Infrastructure as code is one of the technological offspring of this business necessity.

What is Infrastructure as Code?

Infrastructure as code is the practice of defining infrastructure in machine-readable files, which automates the process of provisioning and configuring the resources that will host a software solution.
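To make that concrete, here is a minimal sketch of what a machine-readable resource definition looks like. The resource names and property values below are hypothetical, but the shape mirrors the JSON templates discussed later in this article: the desired infrastructure is described as data, not as a sequence of manual steps.

```python
import json

# Hypothetical resource definition in the spirit of a cloud resource
# template: a database described entirely as data.
template = {
    "resources": [
        {
            "type": "Microsoft.Sql/servers/databases",  # kind of resource
            "name": "client-db",                        # hypothetical name
            "location": "eastus",                       # where to provision it
            "sku": {"name": "S1", "tier": "Standard"},  # size/performance tier
        }
    ]
}

# Because the definition is plain data, it can be stored in version
# control and handed to a deployment tool verbatim.
print(json.dumps(template, indent=2))
```

A deployment tool reads a file like this and creates whatever it describes, which is what turns infrastructure management into a software problem.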


Infrastructure as code might run in an on-site data center to simplify management of that center’s resources. It might also be proprietary code that runs in a private data center and is later migrated to the cloud. Either approach is an improvement over the constant manual management of the bare-metal servers that make up a data center.

However, these approaches don’t tap another potential advantage of running infrastructure as code. When coded infrastructure communicates directly with a cloud through the cloud’s application programming interfaces (APIs), both the infrastructure and the platform it is configured to run reap the full advantages of being in the cloud, including virtually limitless access to resources that are always running the latest technology.

Microsoft Azure is one of the leading public cloud platforms and has pioneered the development of the APIs and resource templates that enable Infrastructure as Code. At DirectScale, our Infrastructure as Code communicates with the Azure platform, leveraging these APIs to allocate and configure resources. When we developed our coded infrastructure and platform, we largely used .NET, a software development platform from Microsoft that, not surprisingly, is a first-class citizen in Azure. (When I say “first-class citizen,” I mean that Azure recognizes and welcomes .NET and that .NET in turn plays nicely and comfortably in the Azure sandbox.)

Our use of infrastructure as code automates the management of our cloud-based data centers, automatically creating and provisioning new resources (such as databases and virtual machines) as necessary.

Why Should You Care?

Infrastructure as code, and specifically our implementation of it, offers several benefits. (Some of these overlap the “Benefits of Cloud Computing.”)

  • Instant Provisioning
  • Scalability
  • Testing Before Launching
  • Reliability 

Instant Provisioning:
Purchase to Productivity in a Snap

When you choose a Software as a Service (SaaS) solution to streamline your direct selling business processes, going from purchase to productivity could take months, weeks, or minutes.

The way we’ve designed and implemented our infrastructure as code enables the latter, a benefit known as “instant provisioning.” When we decide to create a resource, such as a database or a virtual machine, we use an Azure Resource Manager (ARM) template to define not just the resource we want but all of the properties associated with that resource, and then we save that code to a repository.

This allows us to create infrastructure for new clients simply by running the saved code, which in turn automatically provisions the needed resources. As a result, we can get a client up and running within minutes of a contract signing.
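The workflow above can be sketched in a few lines. This is not DirectScale’s actual code; the function and resource names are hypothetical, and the “provisioning” here only reports what a real deployment call to a cloud API would do.

```python
# Hypothetical sketch of instant provisioning: every resource in a
# saved template is handed to a provisioning function, so standing up
# a new client is a single call rather than weeks of manual setup.
def provision_resource(resource: dict) -> str:
    # In a real deployment this would call the cloud's API (for Azure,
    # the Resource Manager); here it just reports what it would do.
    return f"provisioned {resource['type']} '{resource['name']}'"

def provision_client(template: dict) -> list[str]:
    """Provision every resource defined in a saved template."""
    return [provision_resource(r) for r in template["resources"]]

saved_template = {
    "resources": [
        {"type": "database", "name": "client-db"},
        {"type": "virtual-machine", "name": "client-vm"},
    ]
}

for line in provision_client(saved_template):
    print(line)
```

The key point is that the template is the single source of truth: running the same saved definition always yields the same set of resources.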

When your new resources are allocated in the cloud, you automatically receive your login credentials so that you can start customizing our solution and potentially taking your first order within a few hours.

Scalability:
The Sky’s the Limit

Scalability is a system’s capacity to allocate resources and thereby improve performance as needed. There are two methods of scaling: vertical scaling and horizontal scaling.

Vertical scaling involves adding to or upgrading existing hardware. This process requires hands-on management that is costly and doesn’t necessarily allow for seamless expansion. 

Horizontal scaling involves automatically provisioning resources when and as needed, a handy little feature made possible when infrastructure is implemented as code. 

Our coded infrastructure contains metrics that dictate performance requirements, so if a client’s load suddenly becomes too heavy for the resources they’re currently using, our solution automatically provisions access to more.
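A horizontal scaling rule of this kind can be sketched as a simple function. The thresholds and capacity figures below are made-up illustrations, not DirectScale’s actual metrics; the point is that an instance count is computed from load, so capacity tracks demand in both directions.

```python
import math

# Hypothetical autoscaling rule: choose how many instances to run from
# a load metric, clamped between a floor and a ceiling.
def desired_instances(current_load: float,
                      capacity_per_instance: float,
                      min_instances: int = 1,
                      max_instances: int = 20) -> int:
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

# Load spikes: scale out. Load falls: scale back in, so the client
# pays only for what they actually use.
print(desired_instances(950, 100))   # heavy load -> more instances
print(desired_instances(50, 100))    # light load -> back to the floor
```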

Our solution is like an elastic band that expands and contracts to quickly accommodate increased or reduced loads on demand. The process is seamless: you won’t notice what’s happening; you’ll just reap the reward of always getting the performance you pay for and never paying for resources you don’t use.

Because our solution was designed to communicate directly to Azure, our clients gain the benefits of both vertical scaling and horizontal scaling without the hassle of either. Our solution ensures automatic and seamless horizontal scaling, and Azure manages the cost and hassle of vertical scaling. 

Bottom line: when your website needs more power, our solution scales on the fly to handle the increased load. And since we’re in Azure, the sky’s the limit in terms of the load we can handle.

Testing Before Launching:
Smooth Computing Insurance 

When you implement infrastructure as code, your infrastructure gets the same treatment as any other code, the benefits of which include easy upgrades that can be tested before going into production. 

With infrastructure as code, any changes to the infrastructure can be made in a single location and then tested to ensure they are free from compatibility and integration issues (among other things) before they go into production. 

For example, when necessary, DirectScale makes changes to its coded infrastructure, tests the changes, and only then applies them to all the infrastructure elements in production directly through Azure’s APIs by way of their online templates. This process eliminates the risk of human error: make and test the change once and apply it across the board. 
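The make-once, test-once, apply-everywhere process can be sketched as follows. The validation checks and environment names are hypothetical stand-ins for the real checks a pipeline would run; the structure is what matters: a change must pass validation and a staging deployment before it reaches any production environment.

```python
# Hypothetical sketch of testing infrastructure changes before rollout.
def validate(template: dict) -> bool:
    """Cheap structural checks that catch obvious mistakes early."""
    return all("type" in r and "name" in r
               for r in template.get("resources", []))

def deploy(template: dict, environment: str) -> str:
    # Stand-in for a real deployment call to the cloud's API.
    return f"deployed to {environment}"

def rollout(template: dict, production_envs: list[str]) -> list[str]:
    if not validate(template):
        raise ValueError("template failed validation")
    deploy(template, "staging")  # test the change once...
    # ...then apply the identical change across the board.
    return [deploy(template, env) for env in production_envs]
```

Because every environment receives the same validated template, the change cannot drift between environments the way repeated manual edits can.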

Reliability:
Restore the Dotted I’s and Crossed T’s 

Another benefit of running our infrastructure as code in Azure is that we can provision our platform anywhere in the world with the click of a button. 

For example, suppose an Azure data center in the eastern United States goes down and it just so happens that your database and other resources were running in that failed data center. Because of the way we’ve designed our solution, we can provision everything very quickly somewhere else (Australia, western United States, Europe—anywhere) so that the downtime associated with that disaster is minimal, a benefit known as failover. 

In fact, we promise our clients not only minimal downtime in the event of failure, but also an exact duplicate of what they paid for—all the i’s dotted, the t’s crossed—everything as it was. All DirectScale data, including our client data, is replicated in real time to more than one data center. Furthermore, all our databases are point-in-time restorable for 35 days, meaning our clients can choose any minute, including this one, and restore the database precisely as it was in that moment.
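The 35-day point-in-time restore guarantee boils down to a retention-window check. This is an illustrative sketch, not the actual restore logic: a requested restore timestamp is valid only if it falls inside the retention window ending now.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=35)  # retention window from the article

# Hypothetical check mirroring point-in-time restore: any minute inside
# the last 35 days is a valid restore point; anything older (or in the
# future) is not.
def restorable(requested: datetime, now: datetime) -> bool:
    return now - RETENTION <= requested <= now
```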

SaaS solutions that are limited to on-site or private data centers, and that continue to run their databases out of those data centers, must absorb the expense of setting up and managing redundant locations to ensure backup in the event of failure. These expenses are passed directly to their clients. DirectScale, by contrast, leverages the power of the Azure cloud to protect your data at a lower cost while taking full advantage of the platform’s global scale.

Infrastructure as Code:
Invention and Necessity

So when you’re looking for a web-based solution to meet your business needs, keep in mind that the fastest and most efficient solutions are based on infrastructure as code that interfaces directly with a public cloud. Consider asking potential service providers questions like these:

  • Does your solution run on-premises or in the cloud?
  • What technology are you using to manage computer resources and does that technology allow for infrastructure as code?
  • Does your solution allow for coded infrastructure to be defined either through an API or template?
  • Does your solution automate the process of creating and provisioning resources?
  • Does your solution ensure that when you’re creating new resources, you get what you want every time?

Benefits of Cloud Computing

Cost:

Cloud computing reduces several expenses, including hardware and software systems, the IT experts needed to set up and manage those systems, and the electricity required to power and cool them.

Scalability:

Cloud computing automatically delivers resources—more (or less) computing power, storage, and bandwidth—when and where needed.

Performance:

A public cloud with clout (e.g., MS Azure) runs on a worldwide network of secure data centers that are upgraded routinely, so it is always running the latest and fastest hardware.

Security:

Most cloud providers offer policies, technologies and controls that help better protect clients’ data, applications and infrastructure.

Speed:

Cloud computing is typically self-service and on demand, so computing resources can be provisioned within minutes.

Reliability:

Cloud computing makes data backup, disaster recovery and business continuity easier and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s network.


Glossary

  • API: An Application Programming Interface (API) is the go between that enables different computing systems to effectively communicate without either system having to share proprietary information. (Imagine you need to communicate the same request to four people who speak different languages. An API is like the translator who communicates your message and returns four responses in the language you understand.) 
  • Azure Resource Manager (ARM) Template: Used to implement infrastructure as code for Azure solutions. The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration of your solution. The template enables you to define a resource without having to write a sequence of programming commands to create it. In these templates, you specify the resources you want to create and their respective properties. 
  • Database: A structured set of data that runs on a computer and is electronically accessible.
  • Data Center: A collection of computers (possibly housed in a single building or network connected) that store, process and distribute large amounts of data.
  • Failover: The ability to quickly provision an identical twin of a software computing program or platform to minimize downtime in the event of a failure (due to natural disaster, for example). 
  • High Availability: The process of running multiple instances of a software program, application or platform on different servers that are possibly in different parts of the world to ensure zero downtime. The process is effective but very costly. With high availability, you pay for just-in-case resources that you aren’t using. 
  • Public Cloud: Owned and operated by third-party providers, public clouds deliver computing resources (like servers and storage) over the Internet. Consumers access these services using a web browser.
  • SaaS: Software as a Service or SaaS (pronounced “sass”) is a method of delivering software by way of an online subscription that a client typically accesses via the web. SaaS moves away from the traditional paradigm of building a program, then installing and running it on a physical computer.  
  • Vertical Scaling: the process of adding more resources to an existing computing system to increase or otherwise improve performance.  For example, vertical scaling could involve adding more capable bare-metal servers to a data center or increasing the memory on existing ones.  
  • Horizontal Scaling:  the process of increasing the number of instances of a software solution to handle an increase in load (and thereby improve performance) or reducing the instances to ensure a client pays only for what they’re using.
  • Provisioning: The process of starting up or shutting down an instance of a platform running in the cloud. With a software platform running atop infrastructure as code that is designed to interface directly with the cloud, the entire platform can be launched with the click of a button.
  • Virtual Machine: A virtual machine is an emulation of a full operating system that runs in the memory of another operating system, essentially giving a single host computer the functionality of two computers.