Colocation – a bridge between the past and the future for legacy systems

Digital transformation is moving forward at full speed, yet on many occasions, particularly in highly regulated industries or those with long track records, it collides with a formidable obstacle: legacy systems that, despite bordering on inefficiency and technological obsolescence, still provide robust and reliable service.

Companies are not always ready to upgrade or replace these applications: sometimes because of the sheer size of the projects, sometimes because of the associated costs, sometimes because of the risks involved, not to mention that heavy technology investments made in previous years often still need to be amortized.

Cultural barriers also persist, leading companies to keep certain critical systems under their own control despite the widespread adoption of the cloud.

In these cases, colocation services can be used to gain agility and to build a forward-looking perspective. Banking and financial services, oil and gas, mining, healthcare, manufacturing, and utilities are among the industries that fit this profile.

The intermediate step

Let’s remember that colocation allows an organization to host its mission-critical equipment in a high-availability data center. That single step brings multiple benefits: lower maintenance costs, less need for dedicated resources to manage internal infrastructure, lower risk of failing to comply with security and energy-consumption regulations, and business continuity in the event of a natural disaster or other unexpected situation.

For companies, however, it can also be a bridge between the past and the future of their IT strategies: a turning point between owned infrastructure and hybrid cloud environments, an intermediate step on their journey to transformation.

Indeed, in many cases colocation represents the first step in a transformation and modernization project.

A smoother migration

When migration becomes unavoidable, whether because of obsolescence or because the market brings new demands that the old system can no longer meet, the direct connection to cloud providers will already be in place, making the transfer smoother and reducing downtime.

The global colocation market, adapted to companies’ new needs and understood as a necessary stepping stone to the cloud, is gaining ground: according to Statista, the business moved just over US$50 billion in 2021 and is expected to generate just over US$136 billion by 2028.

Digital transformation is advancing at full speed, and having tools such as colocation in place to remove obstacles is essential for companies to remain competitive in the years ahead.

Author:
Gabriel del Campo
VP Data Center, Cloud & Security
Cirion Technologies

Solving the unknown: the best solution for the “hybrid cloud” equation

Hybrid cloud continues to prevail in the market: according to IDC estimates, by the end of this year it will be present in at least 90% of organizations worldwide. The rationale for this trend lies in the significant role of the cloud in digital transformation strategies, as well as in the fluidity and flexibility offered by this model.

A “hybrid cloud,” as we call it, is a technology scenario that combines some local presence or owned infrastructure (a private cloud) with a set of applications, data, and processes migrated to a public cloud. Its standout feature is that workloads move fluidly, transparently, and automatically between the different environments as needed.

It is not a turnkey solution that every company can replicate to obtain the same results, but an equation that poses new unknowns in each particular case. Solving them is the key to achieving all the benefits this type of architecture can potentially provide.

Network speed, processing capacity, architecture optimization, achievable latency, cost control, and overall governance stand out among the variables at stake.

User experience

To find the right pathway, the focus should be on user experience. In every case it must meet users’ expectations, whether they are buying a product online, making a financial transaction, completing work for the office, studying, or simply playing or chatting with friends. Geographic location and access device should not matter either: the experience must be equivalent regardless of these factors.

Analyzing where information is processed and how immediate the response needs to be is therefore key. Workloads that require an immediate response call for reducing the time it takes data to travel to the processing point, while other workloads, because of their dynamics or functionality, are better placed in a core data center. Assembling the architecture and deciding where each workload is processed are thus essential to achieving higher performance.
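
As a minimal sketch of that placement decision, the snippet below applies a hypothetical heuristic based on a workload’s latency tolerance and data-locality constraints; the workload names, thresholds, and tags are invented for illustration and do not correspond to any specific platform.

```python
# Hypothetical workload-placement heuristic (illustrative only).
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    max_latency_ms: float  # tightest response time the workload tolerates
    data_locality: str     # e.g. "regulated" data that must stay on owned infrastructure


def place(workload: Workload) -> str:
    """Suggest an environment for a workload under the assumed thresholds."""
    if workload.data_locality == "regulated":
        return "private cloud / colocation"
    if workload.max_latency_ms < 20:           # latency-sensitive: keep close to users
        return "edge or regional data center"
    return "core data center or public cloud"  # tolerant loads can sit farther away


if __name__ == "__main__":
    for w in (Workload("payments-api", 15, "regulated"),
              Workload("nightly-reporting", 500, "open"),
              Workload("point-of-sale", 10, "open")):
        print(f"{w.name} -> {place(w)}")
```

In a real hybrid cloud the decision also weighs cost, data volume, and governance policies; the point here is only that each workload gets an explicit, revisable placement rule.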

Connectivity among these points is just as fundamental. In a balancing act, multiple alternatives must be explored to identify internet providers that offer good interconnection with their peers, guaranteeing adequate end-to-end performance by optimally concatenating all the hybrid cloud links while keeping costs under control.

With an eye on latency

Throughout this path, the eye will be on latency: delays in data transfer must be minimal and performance must reach its full potential, even for demanding applications such as video, VR, or gaming.
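
To make that constraint concrete, the short calculation below estimates round-trip propagation delay alone, using the common rule of thumb that light travels roughly 200 km per millisecond in optical fiber, and compares it with the roughly 16.7 ms frame budget of a 60 fps video stream or game; the figures are illustrative, and real latency also includes queuing and processing time.

```python
# Back-of-the-envelope latency budget (illustrative figures only).
FIBER_KM_PER_MS = 200.0  # light covers roughly 200 km per millisecond in fiber


def round_trip_ms(distance_km: float) -> float:
    """Propagation delay alone for a round trip, ignoring queuing and processing."""
    return 2 * distance_km / FIBER_KM_PER_MS


if __name__ == "__main__":
    frame_budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 fps
    for km in (50, 500, 2000):
        rtt = round_trip_ms(km)
        verdict = "within" if rtt < frame_budget_ms else "exceeds"
        print(f"{km:>5} km -> {rtt:5.1f} ms RTT ({verdict} a 60 fps frame budget)")
```

Even before any processing happens, a workload served from 2,000 km away has already spent more than a frame’s worth of time on the wire, which is why placement matters.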

Governance, in turn, gives the project a direction aligned with its business purpose: ensuring compliance with regulations and organizational policies, analyzing and monitoring performance, allocating resources appropriately, and establishing continuous improvement criteria, not only from a technology perspective but also from that of people and processes.

Ultimately, hybrid cloud enables the best use of services and the greatest cost savings. But these benefits are not magic: untangling the equation requires a proper understanding of the model, a correct governance implementation, and, of course, getting the math right.

Author:
Gabriel del Campo
VP Data Center, Cloud & Security
Cirion Technologies

Edge Computing: Building faster and smarter networks

For companies in any industry, having computing as distributed as possible has become essential: they must be able to process data as close as possible to where it is produced in order to make faster and smarter decisions based on it.

According to a study by the consulting firm Grand View Research, the global edge computing market will reach US$43.4 billion by 2027[1]. Gartner projections, in turn, indicate that by 2025, 75% of business data will be processed outside conventional data centers or the cloud[2].

Both figures confirm a clear trend: the consolidation of edge-based solutions and platforms.

For years we have seen growing investment in connectivity in Latin America, but what really matters is not just reaching the main hubs; it is making all the infrastructure being built as ubiquitous as possible, so that it reaches the last locations where businesses and people need that connectivity.

Beyond improving connectivity, therefore, it is essential to make computing as distributed as possible: companies must be able to process data as close as possible to the points where it is captured in order to act on it quickly. This means building smarter networks with processing capacity and, from the customer’s perspective, offering not only different ways of connecting to these centers but also the ability to run applications in the most appropriate form (physical servers, virtual machines, containers).
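
As a minimal illustration of processing data where it is captured, the sketch below aggregates a batch of sensor readings at an edge node and forwards only a compact summary toward the core data center; the readings, threshold, and function names are hypothetical and stand in for whatever uplink a real deployment would use.

```python
# Edge-side pre-processing sketch: raw readings are summarized locally and
# only the summary travels to the core (all names and values are hypothetical).
from statistics import mean


def summarize_at_edge(readings: list[float]) -> dict:
    """Reduce a batch of raw readings to a compact summary."""
    return {
        "count": len(readings),
        "avg": round(mean(readings), 2),
        "max": max(readings),
        "alerts": sum(1 for r in readings if r > 90.0),  # hypothetical alert threshold
    }


def send_to_core(summary: dict) -> None:
    # Placeholder for the uplink to a core data center or public cloud.
    print("forwarding to core:", summary)


if __name__ == "__main__":
    raw_batch = [72.4, 88.1, 93.5, 70.0, 95.2]  # e.g. temperature sensors at one site
    send_to_core(summarize_at_edge(raw_batch))
```

The local decision (here, counting alert-level readings) can happen within milliseconds of capture, while the core only receives what it needs for longer-term analysis.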

Thus, for any application, whether business, entertainment, or personal, latency becomes a critical element in defining response times: the lower it is, the greater the ability to offer better services and a better user experience.

The design of the applications to be deployed in edge solutions is also key: from the very beginning it must consider where data will be collected and where it will be processed. This goes beyond the underlying hardware, since it implies a synergistic integration of the architectures involved. Nor should we forget that these systems require highly trained people, so having human capital with the skills this industry calls for is critical.

Finally, it is essential to have all the internal processes needed to ensure the highest possible availability. High availability is critical for this kind of user experience, and the necessary systems must be in place to prevent any impact on the end customer’s service.

Considering the above, an edge-based solution must enable the optimal delivery of any application or transaction, especially because our customers’ data is involved; from the operator’s standpoint, this technology must be implemented under the premise of “ever closer, ever faster, and also ever safer.”

As players in this industry, we are witnessing strong and rapid adoption of edge-based solutions: supply keeps increasing, vendors are strongly driving development, and customer demand is growing as well.

The result is a powerful combination: growing adoption plus concrete use cases that will turn these solutions into a reality hard to imagine today, and that will ultimately change the way we live and work.

 

[1] https://www.grandviewresearch.com/industry-analysis/edge-computing-market 

[2] https://www.gartner.com/en/research/methodologies/magic-quadrants-research 

Author:
Gabriel del Campo
Regional VP of Data Center, Cloud & Security
Lumen, LATAM