
VMware Scale Comparisons

April 18, 2023 by George Crump

During our “InBrief” Event with Truth In IT, one of the most frequently asked questions was about VMware scale comparisons. This series of questions moved beyond the more general Comparing VMware to VergeOS and focused specifically on how VergeOS handles the demands of scale compared to VMware.

To learn more about VergeOS’ scaling capabilities, watch our on-demand webinar “How to Eliminate the Data Center Scale Problem.”

Understanding VMware Scale Methods

Before making any VMware scale comparisons, you must understand its scaling methodology. How VMware scales depends mainly on the infrastructure on which it resides. Most VMware environments use the classic three-tier architecture with physical network switches, servers, and a separate storage system. Most organizations have one primary switch and server vendor, although a few alternate brands may be in use. However, the storage tier, especially as the environment scales, typically has multiple storage systems for different virtual machine (VM) types or use cases.

A less popular alternative is classic hyperconverged infrastructure (HCI), which loads software-defined networking (SDN) and software-defined storage (SDS) software onto the same nodes as VMware ESXi. In most cases, the SDN and SDS software run as VMs and are subject to ESXi's capabilities. As a result, the organization still has a three-tier architecture; those tiers are simply logical instead of physical. This logical recreation of the three-tier architecture is why the classic physical three-tier architecture remains so prevalent.

These two infrastructure choices affect how far VMware can scale. VMware claims to support 96 nodes per ESXi cluster in the classic three-tier architecture, but only 64 nodes in an HCI cluster because of vSAN's limitations.
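
To put those per-cluster ceilings in perspective, here is a quick back-of-the-envelope sketch in Python. The 96- and 64-node limits come from the paragraph above; the 200-node environment is a hypothetical example.

    import math

    # Per-cluster ceilings cited above.
    THREE_TIER_LIMIT = 96   # ESXi cluster on classic three-tier infrastructure
    VSAN_HCI_LIMIT = 64     # ESXi cluster backed by vSAN (HCI)

    def clusters_needed(total_nodes: int, per_cluster_limit: int) -> int:
        """How many separate clusters a given node count forces you to manage."""
        return math.ceil(total_nodes / per_cluster_limit)

    # Hypothetical 200-node environment.
    for label, limit in [("three-tier", THREE_TIER_LIMIT), ("vSAN HCI", VSAN_HCI_LIMIT)]:
        print(f"{label}: {clusters_needed(200, limit)} clusters needed for 200 nodes")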

Understanding VergeOS Scale Methods

VergeOS is an ultraconverged infrastructure (UCI). Like HCI, it does not use an external storage array. Unlike HCI's use of separate SDN and SDS software inside the hypervisor, UCI integrates the networking and storage functionality into the hypervisor. This critical difference significantly improves the ability to scale the infrastructure, especially when you compare UCI to HCI. There is no technical limit on the number of nodes VergeOS supports, and there is no point of "diminishing returns" as you scale. We have customers in production with over 60 active nodes in a single VergeOS instance.

Figure: Comparing three-tier architecture to ultraconverged infrastructure

Subscribe to our Digital Learning Guide (DLG), “Understanding the VergeOS Architecture,” for a deep dive into our ultraconverged infrastructure. Our DLGs are white papers delivered in weekly bite-sized chunks.

Comparing VergeOS to VMware Scale

When making VMware scale comparisons, there are two aspects to remember. First, what are the technical limitations of scalability, and second, what are the ramifications of scaling the cluster on resource utilization and organizational flexibility?

VergeOS’ Technical Scale is Better Than VMware Scale

VergeOS is superior in raw node count versus VMware, enabling large enterprises to meet even the most demanding processing and storage performance requirements. Again, we have production customers with over 60 nodes, hundreds of virtual data centers, and thousands of virtual machines. These customers have been running VergeOS at this level of scale for years. VergeOS customers also don’t need to worry about scaling complexity. With VergeOS, there is only one software package, not three or more.

VergeOS’ Efficient Scale is Better Than VMware Scale

VMware scale comparisons to VergeOS should also include how efficiently the infrastructure scales. While raw node count may be critical for some data centers, most organizations seek more efficiency and flexibility in how the VMware alternative scales. Efficient scale means only adding additional nodes after the existing nodes’ resources have been used to their full potential. An efficient infrastructure can deliver more performance from fewer nodes, which lowers both capital and operational costs.
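
As a simple illustration of that principle (a hypothetical capacity-planning check, not VergeOS logic), new hardware is only recommended once the existing nodes approach their limits:

    def should_add_node(cpu_util: float, mem_util: float, storage_util: float,
                        threshold: float = 0.85) -> bool:
        """Recommend a new node only when existing nodes are near their limits.

        Utilization values are fractions between 0.0 and 1.0; the 0.85
        threshold is an arbitrary example, not a VergeOS default.
        """
        return max(cpu_util, mem_util, storage_util) >= threshold

    print(should_add_node(0.60, 0.70, 0.50))   # False: keep loading existing nodes
    print(should_add_node(0.90, 0.70, 0.50))   # True: time to add hardware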

The comparison of efficient scale is where VergeOS has a clear and more practical advantage. VMware customers who move to our platform repeatedly see better performance from their applications, even though they are running on the same hardware. HCI's lack of efficiency is why many customers who originally consider it end up selecting a classic three-tier architecture instead. UCI delivers the efficiency they need.

As a result, customers can add even more workloads to the environment without purchasing additional hardware. In some cases, they have been able to delay new server purchases for years, thanks to the implementation of VergeOS. The efficiency results from integrating the networking and storage components and, frankly, just better execution of the code. Our efficiency means that while customers can scale further with VergeOS, they won’t have to scale as often.

Figure: VergeOS can scale small for the Edge and large for the Enterprise. Each node is used to its full potential.

VergeOS’ Flexible Scale is Better Than VMware Scale

VMware scale comparisons to VergeOS must also include how flexible it is to scale the cluster. Most customers want to start small and add nodes as workloads and organizational growth demand. They will grow their environment over time, and during that time their needs will change, and so will technology. The first dozen or so nodes they start with may no longer be available by the time they add their fiftieth node.

With VMware, you must create an entirely separate cluster if you add different servers with different configurations, like an AMD processor instead of an Intel processor. While there is a common management interface, plenty of functions need to be set separately. HCI Storage is a good example. It is locked to the cluster and can’t be shared across clusters.

With VergeOS, IT can bring in servers with vastly different configurations: different processors, different storage media types, and even GPUs. They are all managed by a single VergeOS environment. Resources can be isolated to a single virtual data center or distributed across multiple virtual data centers.

VergeOS’ flexibility means that the software can adapt to the organization’s needs and integrate new hardware innovations. IT can use VergeOS for mainstream applications with modest performance requirements, then add high-performance nodes with GPUs and NVMe flash or high-capacity nodes for file sharing and backup. Each of these different hardware configuration types is still managed within the same VergeOS instance.
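
To make the idea of heterogeneous nodes grouped under one management domain concrete, here is a hypothetical data model, for illustration only; it is not the VergeOS API:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        cpu: str        # e.g. "AMD EPYC" or "Intel Xeon"
        storage: str    # e.g. "NVMe flash" or "20TB HDD"
        gpus: int = 0

    @dataclass
    class VirtualDataCenter:
        name: str
        nodes: list = field(default_factory=list)

    # One environment managing very different hardware generations and types.
    vdcs = [
        VirtualDataCenter("mainstream-apps", [Node("n1", "Intel Xeon", "SATA SSD")]),
        VirtualDataCenter("gpu-workloads",   [Node("n2", "AMD EPYC", "NVMe flash", gpus=4)]),
        VirtualDataCenter("file-and-backup", [Node("n3", "Intel Xeon", "20TB HDD")]),
    ]

    for vdc in vdcs:
        print(vdc.name, [n.name for n in vdc.nodes])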

Figure: Flexible scale. A single VergeOS instance can manage nodes of vastly different types.

Conclusion

VMware scale comparisons to VergeOS show how superior VergeOS is in all the ways IT measures scalability. It is affordable for small data centers and enables them to deliver more performance on less hardware while also providing robust networking functionality. Enterprises can support a wide variety of workloads thanks to VergeOS' ability to mix nodes and use virtual data centers. There is no technical limit on how many nodes VergeOS supports, but its efficiency means you will require fewer nodes than you would with VMware.

It is also important to remember that VergeOS is a complete offering and requires no compromises versus VMware. It provides robust data protection, massive capacity scalability, and almost bare-metal performance of virtualized applications. VergeOS’ storage capabilities are so powerful that many customers switch to VergeOS as part of a SAN replacement project.

Figure: VergeOS provides a complete enterprise feature set, including global inline deduplication, immutable snapshots, and WAN-optimized replication.

Filed Under: Blog, Virtualization Tagged With: Alternative, Blog, scale-out, Virtualization

How to Get Fast ROI on HCI Refreshes

February 28, 2023 by George Crump

Customers using hyperconverged infrastructures (HCI) are starting to deal with aging installations and are looking for a fast ROI on HCI refreshes. Vendors like Nutanix and VMware, with vSAN, are instituting dramatic increases in renewal prices. They also overemphasize the cloud and only focus on their largest customers. With infrastructure costs rising, customers are looking for an exit strategy that will lower upfront and long-term costs.

At the same time, there is general frustration with the broken promises of the first wave of HCI. While it simplified infrastructure, it did not come close to delivering a private cloud. IT planners are looking for a solution that will lower operational costs, provide self-service IT, improve availability, and better protect against threats like ransomware. Fast ROI on HCI refreshes requires these capabilities plus lower upfront and long-term costs.

The Cost to Save

The problem with HCI refreshes is that alternative solutions almost always require a substantial new investment in hardware. That hardware either comes bundled with the vendor's software or must meet restrictive and expensive specifications. You only start to see a return on your investment after the savings cover the cost of the new hardware; you must go into the negative before seeing a positive return on investment (ROI).
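
To see why the upfront hardware purchase delays payback, here is a small break-even sketch; the dollar figures are made-up placeholders, not quotes:

    def months_to_break_even(upfront_cost: float, monthly_savings: float) -> float:
        """Months of operational savings needed to recover the initial spend."""
        return upfront_cost / monthly_savings

    # Hypothetical numbers: a refresh that requires $120,000 of new hardware
    # versus one that reuses the servers you already own ($0 upfront).
    print(months_to_break_even(120_000, 5_000))  # 24.0 months in the red first
    print(months_to_break_even(0, 5_000))        # 0.0 months: savings start on day one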

One reason for the new hardware requirement is that the alternative HCI software is as inefficient as the old HCI software; both require more processing power, memory, and storage performance than they should. The vendor compensates for inefficient code by making you spend extra on hardware.

Another reason for the new hardware requirement is the lack of compatibility or supportability of the existing hardware. Forcing customers into new hardware indicates that the developers did not properly abstract the software from the underlying hardware. Their code is too dependent on specific types of hardware.

Fast HCI ROI Requires Repurposing Hardware

Ultraconverged Infrastructure (UCI) rotates the traditional three-tier data center architecture (virtualization, networking, and storage) into a single, linear software plane. VergeOS is a UCI solution that is so efficient it can deliver better performance and workload density using the same physical hardware you have right now. As a result, your ROI starts working the day you install the software; there is no hardware acquisition cost to overcome. To learn more about UCI, read our article "Move Beyond HCI to UCI."

Fast HCI ROI Requires Mixed Nodes

If you need to add nodes because the old system was at its performance or capacity limits, you can still leverage the existing nodes for other tasks, dramatically lowering additional spend. For example, add a set of nodes with more processing power if you need more compute, a set of nodes with high-performance NVMe flash if you need more IO performance, or a set of nodes with 20TB hard drives if you need more capacity. Make your current hardware work as hard as possible for as long as possible. The infrastructure software should enable you to add the minimal amount of hardware you need for its exact purpose while repurposing the existing hardware for everything else.

Figure: Fast HCI ROI requires mixed node support

VergeOS UCI supports mixed nodes, whereas most HCI solutions force you to create a separate infrastructure for each node type. Mixed node support means you can complement the current hardware investment instead of replacing it. As new technology comes to market, you can add it, mixing it with older technology, ensuring that your infrastructure will run for decades instead of four to five years. UCI delivers not only fast ROI but also long-lasting ROI.

Fast HCI ROI requires Global Inline Deduplication

One of the most expensive components of HCI is storage. While some HCI solutions have added deduplication, most can't perform global inline deduplication. As a result, the feature often compromises performance, forcing customers to use it only on the flash tier or to buy more powerful processors and larger quantities of RAM. Again, these additional costs make it difficult to get fast ROI on HCI refreshes. The reason for this overhead is that most HCI solutions added deduplication years after the initial product introduction; it is a bolt-on, not part of the core code.

VergeOS provides global inline deduplication built into the core of the software. Its integration means the algorithm is highly efficient and has little impact on performance. Customers can deploy it without having to use high-performance drives or high-performance CPUs.
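
For readers unfamiliar with the technique, here is a minimal sketch of content-hash-based inline deduplication in general. It illustrates the concept only and is not VergeOS code: each block is fingerprinted before it is stored, so duplicate data consumes space only once.

    import hashlib

    block_store = {}   # fingerprint -> block data, stored exactly once
    volume_map = []    # logical block order for one volume (fingerprints only)

    def write_block(data: bytes) -> str:
        """Deduplicate a block inline: hash it before deciding whether to store it."""
        fp = hashlib.sha256(data).hexdigest()
        if fp not in block_store:      # only new, unique data consumes capacity
            block_store[fp] = data
        volume_map.append(fp)
        return fp

    for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096]:
        write_block(block)

    print(f"logical blocks: {len(volume_map)}, unique blocks stored: {len(block_store)}")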

Fast HCI ROI requires More Than Data Protection

Since fast ROI on HCI refreshes counts on using older hardware, it also requires integrated data protection. Because of flat or even shrinking IT budgets, many organizations are looking to use hardware until it breaks or is not economical to repair. Because older hardware tends to fail more often than newer hardware, uninterrupted access to applications and data is critical. Backups are not enough—the time it takes to recover from even the best backup software solution is no longer acceptable.

VergeOS provides a rapid return to operations by moving an application from one node to another. It provides detailed telemetry information to help you proactively predict node failure. In these cases, you can move workloads to other nodes in the cluster with zero service interruption. If a node fails unexpectedly, instantiating the workloads on another node can happen quickly and automatically.
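
As a rough sketch of the general pattern described above (the telemetry fields and thresholds are illustrative, not VergeOS internals), a scheduler can watch node telemetry and plan live migrations off any node that looks like it is about to fail:

    def plan_evacuations(nodes, temp_limit_c=80, error_limit=10):
        """List (workload, node) pairs that should be moved off suspect nodes.
        The temperature and disk-error thresholds are illustrative only."""
        moves = []
        for node in nodes:
            if node["cpu_temp_c"] > temp_limit_c or node["disk_errors"] > error_limit:
                moves.extend((vm, node["name"]) for vm in node["vms"])
        return moves

    cluster = [
        {"name": "node-1", "cpu_temp_c": 62, "disk_errors": 0,  "vms": ["web-01"]},
        {"name": "node-2", "cpu_temp_c": 91, "disk_errors": 14, "vms": ["db-01", "app-02"]},
    ]
    print(plan_evacuations(cluster))   # [('db-01', 'node-2'), ('app-02', 'node-2')]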

VergeOS also provides complete protection from drive failure and unlimited, immutable snapshots. You can retain snapshots for years and mount them instantly for rapid recovery. Combined with dual-factor authentication, VergeOS provides ransomware resiliency. VergeIO's replication capabilities leverage our global inline deduplication for WAN-efficient transfers. For example, if you are replicating several remote sites or hundreds of Edge locations, any data that is redundant across those sites is only transferred once. The result is that remote locations are fully protected faster, and less money is spent on WAN bandwidth.
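
A minimal sketch of how deduplication-aware replication saves WAN bandwidth in general (hypothetical, for illustration): only blocks whose fingerprints the target site has never seen are shipped across the wire.

    import hashlib

    def replicate(source_blocks, target_fingerprints):
        """Send only blocks the target site doesn't already hold; return count sent."""
        sent = 0
        for block in source_blocks:
            fp = hashlib.sha256(block).hexdigest()
            if fp not in target_fingerprints:
                target_fingerprints.add(fp)   # the WAN transfer would happen here
                sent += 1
        return sent

    target = set()
    site_a = [b"common-os-image", b"site-a-data"]
    site_b = [b"common-os-image", b"site-b-data"]

    print(replicate(site_a, target))  # 2: both blocks are new to the target
    print(replicate(site_b, target))  # 1: the shared OS image is never re-sent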

Fast HCI ROI Requires UCI

Using UCI to get a fast ROI on HCI refreshes doesn't mean starting over, and it is not a months-long conversion project. Many of our customers start by removing a few nodes from their existing infrastructure and transferring virtual machines over to VergeOS. Migration is automatic, quick, and easy. These customers see improved performance and greater storage efficiency. Then, as they see the power and simplicity of VergeOS, they migrate more workloads and transfer more nodes into our environment. With each transfer, costs go down and operations become simpler.

Other customers will buy two or three new nodes to host the VergeIO environment and start creating new workloads on it. Then gradually, they will start migrating other workloads and eventually migrate in the nodes from the old HCI cluster. Again, leveraging a diverse mix of nodes is vital to the process. Other customers will start using VergeOS as a disaster recovery target for their existing environment, lowering DR costs immediately and transitioning as it makes sense.

Next Steps

  • Demo – See a demonstration of a Hyper-V migration by our SE Director Aaron Reid.
  • Watch – A replay of our One-Slide Webinar, “Comparing HCI Solutions,” where Aaron and I compare VergeOS to HCI solutions such as Nutanix and VMware vSAN.
  • Subscribe – to our HCI comparisons research notes, delivered to your inbox every two weeks.
  • Schedule a 20-minute technical whiteboard session to see how VergeOS can solve your infrastructure problems.

Filed Under: HCI Tagged With: Blog

Edge Computing’s Unique Challenges

February 21, 2023 by George Crump

IT needs to understand Edge Computing's unique challenges so it can make the right infrastructure design decisions. Treating Edge Computing as a smaller version of the data center will put data at risk, increase complexity, and raise costs. There are critical differences between Edge Computing, Remote Office/Branch Office (ROBO), and Core Data Center use cases:

  • Serviceability – Edge: limited accessibility; ROBO: accessible; Core: local
  • Management – Edge: remote; ROBO: remote; Core: local
  • Data Protection – Edge: replication; ROBO: on-premises; Core: on-premises
  • Footprint – Edge: shelf; ROBO: closet; Core: data center
  • Power – Edge: constrained; ROBO: available; Core: plentiful

Edge Computing vs. “The Edge”

Edge Computing is different from what is commonly referred to as "the Edge." When we refer to "the Edge," we mean a data collector, like a sensor or a Wi-Fi camera, even though these devices also have a small processor of some sort. Edge Computing is the consolidation of processing power that gathers data from a variety of these sensors and processes it. The goal is either to make real-time decisions, as in an autonomous vehicle, or to consolidate the collected data and send a subset back to a larger data center.

While collecting sensor data and acting on it covers a wide swath of Edge Computing use cases, there are others. It might also be a Point of Sale (POS) system for an organization with dozens or hundreds of retail locations. Other Edge Computing use cases include content delivery systems, video surveillance processing and storage, and dynamically adapting retail advertising.

In addition to real-time decision-making, Edge locations may also, even with today's network capabilities, be bandwidth constrained. A decision made locally is nearly instant, compared with the seconds required to send data to another location and wait for a response; in these cases, instant versus seconds makes a critical difference. It may also be that the bandwidth to the Edge Computing location isn't reliable enough, or that the cost to transmit a large amount of data isn't worth the expense.

Register for this week’s Virtual CxO Roundtable to get answers to all your Edge Computing and Private Cloud questions.

What Makes Edge Computing Unique?

Edge Computing is unique from the core data center and remote office branch office in three key areas:

  • Available Space
  • Serviceability
  • Data Protection

Edge Computing is Space Constrained

Figure: A complete data center in a shoebox

The first of Edge Computing's unique challenges is the physical space available to host the infrastructure. As the table above indicates, the available data center floor space shrinks from a full-scale facility at the core, to a closet in ROBO, to, at best, a shelf in Edge Computing use cases. In some situations, the "data center" is the space underneath the cash register.

The constraints placed on Edge Computing mean that whatever infrastructure you deploy at the Edge needs to run efficiently in that small footprint. The good news is that the hardware to accomplish the feat is available. Mini-servers, like Intel NUCs (Next Unit of Computing), can provide plenty of processing capability while consuming a few dozen watts of power. The problem is finding an efficient software operating environment for those servers.

Edge Computing is Hard to Service

The second of Edge Computing's unique challenges is that it is hard to get to, physically and sometimes remotely. The lack of accessibility makes Edge Computing hardware difficult to service if something goes wrong. Most locations are not in major cities. Sometimes they are "in the middle of nowhere" on purpose because that is where the sensors perform best. Other times they are in small towns, hours away from major airports. The lack of accessibility and serviceability makes redundancy and remote operations critical.

Edge Computing Needs Redundant Availability

Redundant Edge Computing is something that IT planners may overlook, but because of the lack of accessibility, continuous access becomes critical. If the Edge location goes down, sensor data and remote transactions can't be processed. That can mean the loss of critical information that can't be recreated, or lost revenue and unhappy customers.

What to look for:

Given the space efficiency of mini-servers, it makes sense to deploy two or three units, even if one has all the processing power the location needs. Redundancy at the Edge means that the software platform responsible for running operations needs to fail over seamlessly to the surviving servers without complex changes to networking. It also means that a replacement server must be easy to preconfigure so it automatically joins the surviving servers when it arrives at the location.

Edge Computing Needs Redundant Operations

The Edge Computing solution should also be easy to manage and operate remotely. While most solutions provide some form of monitoring, these are often "after-the-fact" products. An add-on product creates a single point of management failure, and the Edge location has no way of knowing whether anything is still "listening." Instead, IT planners should look for solutions where reporting is the responsibility of the Edge Computing solution itself. The edge software platform should send its telemetry data to multiple points, which eliminates the single point of failure.
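
A minimal sketch of that fan-out idea (the collector URLs and payload fields are hypothetical): each edge site pushes its own telemetry to several collectors, so no single listener becomes a point of failure.

    def send(url: str, payload: dict) -> bool:
        """Stand-in for an HTTPS POST to a telemetry collector."""
        print(f"-> {url}: {payload}")
        return True

    COLLECTORS = [
        "https://core.example.com/telemetry",  # core data center
        "https://dr.example.com/telemetry",    # DR site or second collection point
    ]

    def push_telemetry(payload: dict) -> int:
        """Send the same telemetry to every collector; return how many deliveries succeeded."""
        return sum(1 for url in COLLECTORS if send(url, payload))

    push_telemetry({"site": "store-042", "cpu_util": 0.41, "disk_free_gb": 220})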

Moreover, the remote capabilities should include more than remote monitoring. It is not uncommon for Edge Computing locations to number in the dozens, if not hundreds. Having to log in to each location to perform an update or change to a security setting is incredibly time-consuming and increases the chances of human error.

What to look for:

IT planners need to look for a solution that can perform operations like updates or setting changes globally. Executing a change once, instead of logging in to each server individually, increases the efficiency of the IT staff and lowers the overall cost of Edge Computing initiatives.
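
As a hedged sketch of "execute once, apply everywhere" (the site list and the setting are made up for illustration; a real system would call each site's management API):

    def apply_setting_everywhere(sites, setting, value):
        """Push one configuration change to every edge location and report per-site results."""
        results = {}
        for site in sites:
            try:
                site["config"][setting] = value   # stands in for a remote API call
                results[site["name"]] = "ok"
            except Exception as exc:
                results[site["name"]] = f"failed: {exc}"
        return results

    sites = [{"name": f"edge-{i:03d}", "config": {}} for i in range(1, 4)]
    print(apply_setting_everywhere(sites, "ssh_password_auth", "disabled"))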

Edge Computing is Hard to Protect

The third of Edge Computing's unique challenges is its unique data protection needs. In numerous instances, the Edge creates unique data that can't be recreated if it is lost to hardware failure or a site disaster. The challenge is that, given the lack of available space and the operational concerns, there is no room or administrative staff to support an on-premises backup infrastructure.

The Problems with Protecting the Edge with the Public Cloud

Many organizations will consider protecting this data in the public cloud, but end up ruling it out because:

  1. The recurring costs to store dormant data are too expensive
  2. The data is needed at the core data center for further processing
  3. There is too much Edge data and not enough bandwidth
  4. Disaster recovery from the Public Cloud to the edge is difficult

What to look for:

IT planners need to look for a solution that can leverage the extra redundancy within their Edge Computing design to facilitate a reasonable on-premises data protection strategy. While protecting data within the same infrastructure does not technically meet the 3-2-1 data protection rule, it gets close. If the Edge solution can also replicate data efficiently, then it does meet the requirements of the 3-2-1 rule. Global Inline Deduplication is a critical requirement so that redundant data is only sent once and replication jobs complete in record time.
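
A small sketch of the 3-2-1 test the paragraph describes (the copy, media, and offsite counts in the examples are illustrative):

    def meets_3_2_1(copies: int, media_or_systems: int, offsite_copies: int) -> bool:
        """3 copies of the data, on at least 2 different media/systems, with 1 copy offsite."""
        return copies >= 3 and media_or_systems >= 2 and offsite_copies >= 1

    # Redundant copies inside the same edge cluster only: close, but not 3-2-1.
    print(meets_3_2_1(copies=3, media_or_systems=1, offsite_copies=0))   # False
    # Add efficient replication to another site and the rule is satisfied.
    print(meets_3_2_1(copies=3, media_or_systems=2, offsite_copies=1))   # True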

Edge Computing is NOT Remote Office Branch Office

Remote Office and Branch Office (ROBO) IT infrastructures are not the same as Edge Computing infrastructures. First, in most cases, they are significantly easier to get to. Second, there is available space, even if it is a server closet, for a more robust infrastructure that includes data protection.

ROBO infrastructures also tend to support a wider variety of workloads, including file sharing, multiple business applications, and core infrastructure utilities. They do, however, share the need for remote operations and can certainly benefit from many of the capabilities that infrastructure at the Edge requires.

Most IT vendors can’t span all three use cases with a single software solution. They may address the specific needs of each use case, but they do so with alternative solutions which require unique training for each one, patch monitoring and implementation as well as unique data protection.

What to look for:

IT Planners should look for an infrastructure solution that can span all three location types and add in the public cloud. Imagine the efficiency of running the same networking, storage, and hypervisor software throughout your sprawling infrastructure.

VergeOS, One and Done

VergeIO is an ultraconverged infrastructure (UCI) company. UCI differs from Hyperconverged Infrastructure (HCI) in that it rotates the traditional three-tier IT stack (networking, storage, and compute) onto a linear plane through a single piece of software that we call VergeOS. The result is an efficient data center operating system (DCOS) that can deliver more performance and run a greater variety of workloads on less physical hardware. If you are being asked to do “more with less,” VergeOS is your solution.

In one bootable operating system you eliminate the need for separate storage software, proprietary networking hardware, independent hypervisors, separate “cloud” functionality, data protection software, disaster recovery software, and multiple management interfaces. All of these functions are included in VergeOS’s single piece of software.

VergeOS is able to address all of Edge Computing's unique challenges. It provides:

  • Downward scale to one or two nodes
  • Seamless redundancy, data protection and ransomware resiliency
  • A mesh-like management framework for monitoring and operations
  • Upward scale for branch offices, core data centers, and the cloud

With VergeOS, you don’t have to “go to” the cloud. You can “be the cloud.”

Next Steps

  • This week we are holding a Virtual CxO Roundtable on "Edge Computing and Private Cloud Infrastructures." We will answer the questions about these two topics that we've been collecting over the last few weeks, and we'll take live questions from our audience. If you have a question, you can submit it in the comments section below. Register
  • We also have a complete tutorial on developing an Edge Computing strategy. Subscribe to our Digital Learning Guide, "Creating an Edge Computing Strategy."
  • Learn More:
    • VergeOS Atria, which enables you to Be the Cloud and Own the Edge
    • "The Three Requirements for Edge Computing"
    • "One Infrastructure for Edge and Core"

Filed Under: Edge Computing Tagged With: Blog, Edge

How to Repatriate Cloud Workloads

February 14, 2023 by George Crump

The unpredictable cost of the public cloud and the difficulty of guaranteeing performance levels have many IT professionals trying to figure out how to repatriate cloud workloads, or at least forcing them to re-examine their cloud migration strategies. There have been several high-profile examples of companies completely exiting public cloud services like Amazon AWS, Microsoft Azure, and Google Cloud. Complete cloud repatriation may not be required, but if you plan to repatriate cloud workloads, we have a template you can use to help make those decisions.

Why Repatriate Cloud Workloads?

The primary motivation for repatriating cloud workloads is to lower costs. Other concerns include guaranteeing performance and increasing security. Most organizations have not eliminated their data center, so they still have assets available to repatriate cloud workloads. From a hard cost perspective, on-premises infrastructure is far less expensive, especially if you can get five or more years of serviceability out of the investment. If you are going to use something for a long time, owning it is always better than renting it.

The public cloud model generally lures IT leaders with its promise of operational simplicity and of eliminating hardware refreshes. Repatriating workloads means IT must either learn to live with those two problems again or look for a new operating environment.

Why Not Repatriate Cloud Workloads?

Moving a workload back on-premises isn’t the issue. There are plenty of methods to get your applications and data out of the cloud. The question is, what on-premises infrastructure will you use to host the repatriated applications?

The on-premises options are either the traditional three-tier architecture, with separate storage, virtualization, and networking infrastructures, or a hyperconverged infrastructure (HCI) that claims to converge those three tiers. As we explain in our article "Move Beyond HCI to UCI," most HCI solutions don't actually converge anything; virtualization, storage, and networking remain three (or more) distinct code bases.

Both three-tier architectures and HCI suffer from inefficiency, complexity, and a lack of longevity. They don’t efficiently use the hardware resources that IT applies to them, which is a waste of IT budget dollars. Because of their multiple layers, these architectures are challenging to manage, patch, and upgrade. And their inability to integrate different hardware into the same operating environment requires IT to perform a storage or server replacement or refresh every four to five years.

Don’t repatriate your cloud workloads to the same environment that caused you to push those workloads to the public cloud in the first place. Use cloud repatriation as an opportunity to upgrade your data center to a private cloud.

Create a Private Cloud

To repatriate workloads, organizations need to take the best of the public cloud and combine it with the best of on-premises infrastructure. The goal should be to create a private cloud where the organization owns the infrastructure instead of renting it but has the serviceability and flexibility of the public cloud.

The problem is that almost every infrastructure vendor uses the term private cloud, but their products don't deliver on its promise. A private cloud needs an infrastructure that can scale small enough to support Edge Computing use cases and large enough to support the most demanding enterprises. A private cloud is not a software-defined data center (SDDC). It is a software-defined organization where every data center and Edge location is part of the same operating environment. It should enable entire workloads, not just virtual machines, to move between IT locations at the click of a button and, when they arrive at the new location, be fully functional, including networking.

Delivering a private cloud that provides better-than-public-cloud operations requires a data center operating system that consolidates virtualization, storage, networking, and data protection into a single piece of software, creating an ultraconverged infrastructure (UCI). UCI simplifies operations at the primary data center location, remote data centers, and the Edge.

To learn more about making the private cloud a reality, register for our webinar “Infrastructures: Edge Computing and Private Cloud.”

VergeOS, The Way to Repatriate Cloud Workloads

VergeOS is the way to repatriate cloud workloads or to keep workloads on-premises that you are considering moving to the public cloud. The ultraconverged solution includes virtualization, storage, networking, and data protection within a single piece of software. VergeOS enables you to create virtual data centers (VDCs) and assign specific hardware resources to them so you can guarantee workload performance. It also, in most cases, leverages existing servers, even if they are a couple of years old. With VergeOS, you can also use VDCs as a method of IT delegation, creating secure, isolated environments and assigning them by line of business, location, or function.


In the Atria release of VergeOS, we’ve delivered the Recipe Marketplace, a catalog of preconfigured virtual machines, and even complete workloads with all the associated virtual machines, networking, storage, and data protection settings. In the initial release, we include recipes to set up a Docker Container environment, LAMP stack, and an Object Store, with more on the way. You can also create your own recipes for workloads common to your organization and use the marketplace to present them to groups to which you delegate VDCs.
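
To give a feel for what such a recipe could contain, here is a hypothetical recipe structure for illustration only; it is not the actual VergeOS recipe format.

    # A made-up description of a LAMP-style workload: the VMs, the network they
    # share, and the data protection policy are declared together as one recipe.
    lamp_recipe = {
        "name": "lamp-stack",
        "virtual_machines": [
            {"name": "web", "cpus": 2, "ram_gb": 4,  "disk_gb": 40},
            {"name": "db",  "cpus": 4, "ram_gb": 16, "disk_gb": 200},
        ],
        "network": {"internal": "10.10.0.0/24", "expose_ports": [80, 443]},
        "data_protection": {"snapshot_schedule": "hourly", "retention_days": 30},
    }

    def deploy(recipe: dict, vdc: str) -> None:
        """Placeholder for handing a recipe to the target virtual data center."""
        print(f"Deploying {recipe['name']} ({len(recipe['virtual_machines'])} VMs) into {vdc}")

    deploy(lamp_recipe, vdc="marketing-vdc")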

The Atria release also includes Site Manager, a global mesh-like management framework that enables IT to manage multiple data centers and edge locations from a single interface. Site Manager is built directly into the VergeOS code base. It is not a separate piece of software. As a result, each location knows about the other locations and can report telemetry information to multiple points, eliminating the single point of management failure concern common in add-on solutions. With Atria, VergeOS creates an organization-wide ultraconverged infrastructure that simplifies IT and dramatically reduces costs.


Conclusion

As Chris Evans at Architecting IT pointed out in his article "The Great Cloud Repatriation Debate – Compute," it is not clear how many organizations are looking to repatriate cloud workloads. Still, there are plenty of organizations that have slowed their cloud migrations, and there is little doubt that there is concern over the public cloud model in terms of cost, guaranteed performance, and security. The hesitation to move workloads back on-premises may stem from IT knowing that the environment to which they would repatriate those workloads is more complex and brittle than the public cloud because of legacy three-tier architectures and HCI. Armed with ultraconverged infrastructure software like VergeOS, IT can provide a better-than-cloud experience for themselves and the organizations they serve.

Next Steps

Template: Get our free “Cloud Repatriation Template.”

Watch: Our in-depth LightBoard Video on Edge Computing and Private Cloud Data Centers.

Subscribe: To our Edge Computing Tutorial “Creating an Edge Computing Strategy.”

Watch: Our On-Demand webinar “Beyond HCI” for a comparison to HCI and a demonstration.

Filed Under: Blog, Private Cloud Tagged With: Blog
