Often, IT leaders turn an eye toward the cloud when they’re looking to solve elasticity and scalability issues. One of the key differentiators of the cloud compared to traditional infrastructure is its inherent flexibility. For applications such as eCommerce and big data analytics, the ability to scale up and down as demand changes is a powerful capability, allowing architects to improve performance and reduce costs.
Although the gains to be made are very real, moving a legacy application to the cloud generally isn’t as easy as picking it up and moving it. Because cloud services are predicated on a fundamentally different architecture, a rewrite of the application is often needed; at a bare minimum, configuration changes will be required to ensure the application transitions smoothly. All of this is very difficult, if not impossible, without a proper understanding of how the application works and what its dependencies are.
Data Center to Data Center Migrations
Moving data between data centers is a long-standing tradition in IT, whether for replication to passive backups, secondary workloads, or geo-located users. However, as data grew into entire workloads, inclusive of VMs and multiple volumes, these migrations became more and more complex. Moving entire workloads between data centers today requires moving not only the data, but also the servers and applications that consume and leverage that data. This demands an unprecedented amount of monitoring, if only to keep track of where everything is at any given time.
This has a compounding cost effect over time, and it is one of the principal reasons companies began looking at third-party infrastructure (clouds) years ago: it was much more cost-effective to use a service provider than to invest the capital expense required to stand up a duplicate data center.
Workload Migration Challenges
Migrating workloads, especially complex, multi-tiered applications, can be a harrowing process. Undertaking a migration without a solid understanding of application interdependencies and exact resource requirements means almost certain failure. Failures can, do, and will happen; what’s not acceptable today is downtime. Migration cutovers need to be near-instant, an hour at most.
A failed migration can often be attributed to a bottom-up infrastructure focus: simply trying to mirror the hardware available on-site. The downfall of most workload migrations is a lack of visibility into the configuration of the actual application(s) and how they consume resources from that infrastructure. With a proper understanding of how applications interact, it becomes much clearer what the requirements at the secondary site really are.
Mapping Application Dependencies
Understanding and documenting the relationship between all components of an application in extensive detail is vital to the success of a migration, as well as to managing and maintaining that application. Most organizations lack this level of insight, as documentation is almost instantly out of date the minute it is written, and deep understanding of the application often exists only as tribal knowledge in the heads of the administrators maintaining the application.
What happens to a critical application if there’s a problem and the administrators with that tribal knowledge are unavailable? Perhaps they’re busy, on vacation, hospitalized, or retired; there are countless reasons tribal knowledge goes offline. Manually creating the workflows and processes necessary to avoid downtime or repair failures is no longer enough in today’s IT world. These recoveries can instead be managed and monitored with automation, with workflows triggered automatically in the event of a failure, eliminating this vulnerability.
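As a rough illustration of that kind of automation (the function names and probe logic below are hypothetical, not any particular product’s API), a recovery trigger can be as simple as a watchdog loop that invokes a workflow whenever a health probe fails:

```python
def run_watchdog(service, probe, recover, checks):
    """Probe a service `checks` times, invoking the recovery workflow on
    each failed probe. A production watchdog would loop indefinitely,
    sleep between probes, and alert on repeated failures."""
    recoveries = 0
    for _ in range(checks):
        if not probe(service):    # probe: e.g. an HTTP or TCP health check
            recover(service)      # e.g. restart, fail over, or page on-call
            recoveries += 1
    return recoveries
```

The point is not the loop itself but what `recover` encodes: the repair steps that today live only in an administrator’s head, captured as an executable workflow.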
No one person should hold the keys to your digital kingdom, but let’s also not assume that there is even a single person that has the detailed knowledge necessary to migrate or restore application services. In a modern application-centric IT organization, you can use a tool to provide this insight into the interdependencies of applications for you.
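A minimal sketch of the kind of insight such a tool computes, assuming hypothetical service names and observed connection data (e.g. gathered from flow records or per-host socket listings):

```python
from collections import defaultdict

# Hypothetical observed service-to-service connections.
OBSERVED_CONNECTIONS = [
    ("web-frontend", "app-server"),
    ("app-server", "orders-db"),
    ("app-server", "cache"),
    ("reporting", "orders-db"),
]

def build_dependency_graph(connections):
    """Map each service to the set of services it talks to."""
    graph = defaultdict(set)
    for src, dst in connections:
        graph[src].add(dst)
    return graph

def transitive_dependencies(graph, service):
    """Everything that must move with `service`, or stay reachable from it."""
    seen, stack = set(), [service]
    while stack:
        for dep in graph.get(stack.pop(), ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen
```

Even this toy graph shows why dependency mapping matters: migrating `web-frontend` alone silently drags along the app server, the database, and the cache.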
Understanding Resource Requirements
When breaking down an application, you need to consider all the services it provides and connects with in order to properly recreate the resources required to run it. Oftentimes, an application can run temporarily in a degraded state while a migration is in process, or during a failover scenario. Said another way, you don’t always need identical tier-one hardware to stand up another copy of the application, but you absolutely need in-sync data and access availability. Users forgive a lack of speed far more readily than a complete lack of access.
So how do you measure the sheer breadth of an application?
• Services running within the app
• External providers of data to the app
• External consumers of data from the app
• Servers and OS versions required to run the app
• Storage systems required to host the app data
All of these components are baseline requirements when it comes to understanding the needs of your application and determining its portability.
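One way to make that determination repeatable is to capture the components above in a machine-readable manifest. The sketch below is illustrative only; the field names and the `portability_gaps` check are assumptions for this example, not any particular tool’s format:

```python
# Hypothetical manifest covering the baseline components listed above.
APP_MANIFEST = {
    "name": "order-processing",
    "services": ["nginx", "order-api", "worker"],
    "external_providers": ["payment-gateway"],
    "external_consumers": ["bi-reporting"],
    "servers": [{"os": "Ubuntu 20.04", "cpu": 8, "ram_gb": 32}],
    "storage": [{"system": "nfs", "capacity_gb": 500}],
}

def portability_gaps(manifest, target_site):
    """Return the requirements a candidate target site cannot satisfy."""
    gaps = []
    for server in manifest["servers"]:
        if server["os"] not in target_site.get("os_images", []):
            gaps.append(f"missing OS image: {server['os']}")
    for vol in manifest["storage"]:
        if vol["capacity_gb"] > target_site.get("free_storage_gb", 0):
            gaps.append(f"insufficient storage for {vol['system']}")
    return gaps
```

An empty result means the target can host the application, possibly in the degraded-but-accessible state described above; a non-empty result is the pre-migration checklist.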
At Uila we recently published a new book entitled The Gorilla Guide to … Application-Centric IT. In this free book, you’ll learn:
- The advantages of an application-focused approach to IT
- How application dependencies can simplify workload migration and resource planning
- How to start developing a "full stack" mindset for managing applications