It was a big day for the company; every function had been planning
for many months. The last time around, which was actually the first time, the
company had fallen flat amidst a huge uproar from customers who were left
dissatisfied and unfulfilled. Learning from that fiasco, they had planned
carefully, looking at each and every component distributed among the three
streams that had to work together for collective success. This was their last
opportunity to redeem themselves; the team was on tenterhooks, badly wanting
success.
D-day arrived; there
was complete harmony between them, as if they had telepathic connectivity:
intelligence of the highest possible order ensuring that when one requested a
resource, it was made available within a fraction of a second. Together they
ensured that no one was starved of any resource, with what appeared to be
limitless elasticity. The datacenter team had solved the problem of planned and
unplanned demand peaks. They had learned to manage high variability and surges, and
could now think of life beyond work.
Who comprised the magic team that found the silver bullet
and solved the problems every datacenter head and CIO struggles with? The triad
of resources most critical for any application to work: network, compute,
and storage! With a combination of Automation, Orchestration, Central
Governance, and Dynamic Workload Placement, the ultimate dream of a Software
Defined Datacenter, one that is application aware and ensures that no resource
is ever constrained irrespective of workload, had been achieved.
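For the technically inclined, dynamic workload placement can be pictured as a scheduler that matches each workload's request against the capacity remaining in the compute, network, and storage pools. The sketch below is a deliberately simplified illustration; the pool names, fields, and best-fit rule are my own assumptions, not any vendor's implementation.

# Hypothetical sketch of dynamic workload placement (illustrative only).
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    cpu: int         # available vCPUs
    storage_gb: int  # available storage capacity
    net_gbps: int    # available network bandwidth

def place(workload, pools):
    """Pick the smallest pool that still fits (best fit), keeping large pools free."""
    candidates = [p for p in pools
                  if p.cpu >= workload["cpu"]
                  and p.storage_gb >= workload["storage_gb"]
                  and p.net_gbps >= workload["net_gbps"]]
    if not candidates:
        return None  # no capacity left: trigger elastic expansion, e.g. burst to cloud
    best = min(candidates, key=lambda p: p.cpu - workload["cpu"])
    best.cpu -= workload["cpu"]
    best.storage_gb -= workload["storage_gb"]
    best.net_gbps -= workload["net_gbps"]
    return best.name

pools = [Pool("rack-a", cpu=64, storage_gb=4000, net_gbps=40),
         Pool("cloud-burst", cpu=512, storage_gb=50000, net_gbps=100)]
print(place({"cpu": 16, "storage_gb": 500, "net_gbps": 10}, pools))  # -> rack-a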
The journey started almost two decades back with Server
Virtualization, which quickly evolved from spreading load within one server to going across
multiple boxes within the rack, then the datacenter, and finally the Cloud. With
near-unlimited resources running on commodity hardware, it is now possible to react
to sudden and planned upsurges. Cloud providers have built capacity that
can collectively address the entire world's needs at steady state. Economies of
scale have been achieved by many; the decision to build dedicated datacenters is
now an emotional rather than a rational one.
Software Defined Networks have evolved to allocate resources
based on workloads, managing capacity for optimal use. They are application aware
and prioritize traffic based on neural-network-based learning; the disruption
to existing hardware players has had them acquiring some of the new, innovative
startups. Finally, hardware-independent storage virtualization and automation brings
up the rear, having evolved last and still facing interoperability challenges
that threaten the stranglehold of dominant players. Collectively,
the SDDC has arrived and is gaining momentum!
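As a simplified illustration of application-aware prioritization: a controller can be thought of as ordering flows by the class of application they belong to before allocating bandwidth. The application classes and weights below are hypothetical placeholders; real controllers learn and enforce such policies rather than hard-coding them.

# Hypothetical sketch of application-aware traffic prioritization (illustrative only).
PRIORITY = {"voice": 0, "trading": 0, "web": 1, "backup": 2}  # lower value = serviced first

def schedule(flows):
    """Order flows by application priority, then by declared bandwidth need."""
    return sorted(flows, key=lambda f: (PRIORITY.get(f["app"], 3), -f["mbps"]))

flows = [{"app": "backup", "mbps": 800},
         {"app": "web", "mbps": 200},
         {"app": "trading", "mbps": 50}]
for f in schedule(flows):
    print(f["app"], f["mbps"])  # trading first, backup last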
A Software Defined Datacenter is a complex architecture of
hardware, network, compute, storage, and unified software that is aware of application
context, working within the boundaries of defined SLAs and configured for the
business. The abstraction layer thus defined delivers everything as a measurable,
software-configurable service. The business logic layer is intelligent enough to
identify trends and constraints and overcome them in an interconnected world of
on-premise datacenters and Cloud (compute, storage, and applications).
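To make "everything as a measurable, software-configurable service" a little more concrete, here is a minimal sketch of what a declarative service definition and an SLA check might look like. The field names and thresholds are invented for illustration and do not correspond to any vendor's API.

# Hypothetical, declarative service definition consumed by an SDDC abstraction layer.
service = {
    "name": "order-portal",
    "sla": {"latency_ms": 200, "availability_pct": 99.9},
    "compute": {"vcpus": 8, "memory_gb": 32, "min_instances": 2, "max_instances": 20},
    "storage": {"capacity_gb": 500, "tier": "ssd"},
    "network": {"bandwidth_mbps": 500, "priority": "high"},
}

def within_sla(measured, sla):
    """Compare measured metrics against the declared SLA; the business logic
    layer would scale or re-place the workload when this returns False."""
    return (measured["latency_ms"] <= sla["latency_ms"]
            and measured["availability_pct"] >= sla["availability_pct"])

print(within_sla({"latency_ms": 250, "availability_pct": 99.95}, service["sla"]))  # False -> act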
In contrast, current traditional datacenters focus on discrete
components (hardware and devices) that require individual configuration,
provisioning, and monitoring. Depending on where they sit on the maturity curve,
enterprises have adopted various parts of the SDDC in their environments, whereas
mature service providers have come closer to creating the software defined
datacenter. Interoperability and transition between competing SDDC propositions
still remain a challenge, given proprietary and still-evolving software stacks.
So where should IT and CIOs focus or place their bets? Can
they create the magic described above within their datacenters? Is it feasible
with permissible or marginally higher investments? Which vendors should they
bet on, and how should they manage the existing stack of hardware and software? Does it
make sense to just outsource the entire proposition and let someone else manage
the solution and its associated headaches? What about the availability of skills to
create and then manage the complexity that SDDCs bring?
I believe the answer lies with existing scale and intricacy;
the SDDC can bring a high level of efficiency to large datacenter operations where
the variability of demand coupled with dynamic scale provisioning will deliver
efficiency and potential cost savings over the long run. The other scenarios
that make a case are businesses with unpredictable load and high variability (e.g.
ecommerce portals, news sites, stock exchanges, and social media). Smaller
companies can derive benefit from parts of the solution with high levels of virtualization.
Either way, start building components that can help you move
towards the SDDC.