Going The Last Inch – Virtualising the Desktop

By John Barco, head of product development, NComputing.

 

Date: 27 Feb 2012

Gartner's recent CIO survey found that CIOs' budget predictions for 2011 are flat globally, with the exception of emerging IT models such as cloud-based computing and the virtualisation environments that enable it. Tight budgets, the demand to do and deliver more with less, and emerging technology ecosystems are forcing CIOs to re-imagine IT. Many organisations are investigating and piloting transformational technology such as SaaS-based services in a virtualised environment. But when budgets are really stretched and you still have to deliver the best user experience in a secure environment, how do you go that final step to make sure that every last inch of the IT strategy is covered?

Could the answer be found in virtualising the desktop? Is there more to this technology than its role as an enabler of cloud computing? With upfront hardware costs reduced by 75%, maintenance by 75% and power consumption by 90%, and with zero clients removing the local data-integrity risk, it is starting to look like an inch worth covering.

We’ve all become accustomed to the PC model, which allows every user to have their own CPU, hard disk, and memory to run their applications. But personal computers have now become so powerful that most users simply don’t need, in fact can’t possibly use, all the processing power. Desktop virtualization is a modern take on the time-honored concept where multiple users share the processing power of a single computer. This approach has several advantages over the traditional PC model, including lower overall costs, better energy efficiency, and simplified administration.

Over the past 30 years, PCs have changed the way we work, play, learn, and think about computer technology. From the first single-chip microprocessor in 1971 to the latest multi-core CPUs powering today’s PCs, users have come to rely upon owning and controlling their own processing power. In large part, the PC became successful because it took the processing power out of the data center and placed it directly on our desktops.

But with that desktop power and control also came responsibility—the responsibility to maintain, troubleshoot, and upgrade the PC when needed. After all, the PC is a machine and all machines need regular care. As PC buyers and users, we may have welcomed the capabilities and productivity increases the PC brought, but no one warned us that even with help from a dedicated IT department, we’d have to spend over 17 hours annually maintaining each PC.

IT service costs are trending upward, driven by ever-increasing software and support costs. Security, data privacy, manageability, uptime, space, power, and cooling challenges are driving many organizations to look for alternatives to the traditional distributed PC model. Thin clients faltered because they were still “too fat”, with PC-like local operating systems (Windows XP Embedded, Linux, etc.), “full-power” processors, PC memory, local flash drives, virus vulnerabilities, and the management challenges associated with these components.

While the traditional PC market is not growing very fast, its enormous size continues to drive significant innovations such as multi-core processors. The result is that today’s PCs can outperform high-end servers of just a few years ago. This opens the door to a new age of virtual computing where the power of an everyday PC gets used as efficiently as possible by multiple users at once.

Since we’ve been putting PCs on people’s desks all these years, many have forgotten how computers worked before the PC came along. In pre-PC days, computing was done on mainframes—large boxes that sat in specially cooled rooms on raised floors—and they had connections to dumb terminals spread out over the premises. This single centralized computer performed the processing for all of the users. Users also didn’t have to administer the box—that was the responsibility of the technicians of the day. If a user had a problem, all they had to do was call the computer room and ask for help, since support had to be centralized along with the computer.

Of course, mainframes such as the IBM System/360 had big disadvantages: cost ($133,000 for the entry-level model in 1965) and environmental considerations (space, power and cooling). They also required dedicated staff to support and maintain the system. People spent years training to understand and learn the tasks necessary to keep these systems running, which meant the number of people qualified to maintain a System/360 was very small. This relegated the mainframe to large corporations, governments, and educational institutions.

The next step was the minicomputer, which also used centralized resources, but at a much lower cost than a mainframe. With the arrival of the PC (and its close cousin, the PC-based server), mainframes fell out of fashion. Servers replaced mainframes in the data center, and many were called upon to perform the same duty. This gave rise to the concept of server-based computing (SBC), which is like mainframe computing with a few minor differences. The dumb terminal is replaced by a PC that communicates with a server and receives a full-screen interface transferred across the network. The most popular application of server-based computing has been to host a small subset of applications on a server that are accessed by a PC client in this way; the PC still runs local applications in addition to the server-based applications hosted with Citrix or Microsoft Terminal Services software. In some SBC installations, a slimmed-down version of a PC with a low-end processor and flash storage, called a “thin client,” is used instead. With the thin client approach, most, if not all, applications run on the server.

SBC was intended to provide the same advantages as mainframe computing while mitigating the cost and environmental factors, but it created a completely different set of disadvantages, including:
• Constrained user experience with limited desktop interface performance, especially when graphical applications are used.
• Expensive thin clients that are fundamentally still PCs and commonly require special customizations.
• Expensive, high-end server components.
• Complex setup and administration requiring network administrators with specialized skills.

 

So how can you get the benefits of SBC without its disadvantages? The answer is the fast-emerging model of desktop virtualization, which enables a single PC to simultaneously support two or more users, each running their own independent set of applications. The key to making this solution deliver a true PC-like experience is optimizing the three core components of the technology to work together: the software that virtualizes resources on the PC, the protocol that extends the user interface, and the client or access device. It is this high degree of optimization that allows desktop virtualization solutions to run on PC hardware (not just server hardware) and deliver all of the benefits of SBC without the drawbacks. This model has demonstrated its ability to extend computing access to a whole new set of users in schools and the developing world, creating a ‘new inch’ rather than a last inch, while slashing computing costs for small, medium and large businesses worldwide.
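To make the shared-host idea concrete, here is a minimal conceptual sketch, in Python, of one machine being multiplexed into independent per-user sessions. It illustrates the model only: the class and method names are invented for the example and do not correspond to NComputing's actual virtualization software, display protocol, or access devices.

from dataclasses import dataclass, field

@dataclass
class UserSession:
    """One user's independent slice of the shared PC (illustrative only)."""
    user: str
    apps: list[str] = field(default_factory=list)

    def launch(self, app: str) -> str:
        # Each session keeps its own application state, isolated from the others.
        self.apps.append(app)
        return f"{self.user}: {app} running in an isolated session"

@dataclass
class SharedHost:
    """The single PC whose resources are multiplexed among several users."""
    cores: int
    sessions: dict[str, UserSession] = field(default_factory=dict)

    def connect(self, user: str) -> UserSession:
        # In a real deployment the access device would attach over a display
        # protocol; here we simply create or reuse a session object.
        return self.sessions.setdefault(user, UserSession(user))

if __name__ == "__main__":
    host = SharedHost(cores=4)
    for name in ("alice", "bob", "carol"):
        print(host.connect(name).launch("spreadsheet"))
    print(f"{len(host.sessions)} independent sessions sharing {host.cores} cores")

In an actual deployment, the virtualization software enforces resource and security isolation between sessions, while the protocol carries each user's screen, keyboard and mouse to a low-power access device on the desk.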
It also has a role in cloud computing as a sort of ‘last inch’. Cloud computing’s promises of lower costs, operational efficiency and business agility are critical adoption factors for businesses of all sizes. For the small business, cloud computing offers a level playing field, with access to the kind of applications and compute power that previously only the largest Fortune 500 companies could afford. For the large business, cloud computing’s role in enabling anywhere working and anywhere computing, its speed of deployment, and the availability of new business architectures represent crucial strategic advantages.
There is arguably an inextricable link between virtualization and cloud computing. While most people associate only server virtualization with cloud computing, desktop virtualization has an important role to play. For example, India has been one of the most prolific adopters of desktop virtualization and is home to some of the world’s largest and, arguably, most innovative cloud computing projects, such as a project with the Employee State Insurance Corporation.
The Employee State Insurance Corporation (ESIC) is chartered by the Indian government to deliver insurance and healthcare benefits to over 20 million private- and public-sector employees. ESIC provides these services through over 2,000 facilities that include hospitals, dispensaries, and branch offices across India. ESIC’s IT systems were outdated and could not provide high-quality service to its stakeholders.
The organization turned to Wipro to develop and manage a fully networked, cloud-based IT system that would cost-efficiently leapfrog existing technologies and dramatically improve service delivery. Wipro designed a private cloud infrastructure that included a centralized data center and 31,000 NComputing virtual desktops.
Companies constantly look for ways to reduce IT costs and increase productivity and agility. Desktop virtualization is an increasingly attractive path for organizations of all sizes to consider, with its promise of a cheaper, greener and more flexible solution for desktop computing access. The combination of low-cost, energy-efficient access devices and software that lets IT centrally manage virtual user sessions is a powerful package that transforms the way companies deploy client computing services. It also represents an ideal fit for an end-user client strategy in cloud computing environments.

The size and scale of the opportunity for desktop virtualization is highlighted by Gartner’s recent re-forecast of the move toward hosted virtual desktops, which it projected would reach 74 million users by 2014, or 15% of the business desktop market. This is only the tip of the iceberg: the figure does not fully reflect the potential of business deployments, because the forecast only considers deployments for user groups of 250 or more as economically viable for virtualized desktops.
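For a rough sense of scale, the two figures above imply the size of the business desktop base the forecast is measured against (a back-of-the-envelope calculation, not a number quoted by Gartner):

\[ \frac{74\ \text{million hosted virtual desktop users}}{0.15} \approx 493\ \text{million business desktops} \]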

Moreover, the truly disruptive economics of virtual desktops, which when naturally priced reduce all client-related costs by 50% or more compared with traditional PCs, bode well for mature corporate markets going through PC replacement cycles as well as for emerging markets that do not have legacy models to displace. In short, desktop virtualization will have an increasingly important ‘last inch’ role in fast-emerging transformational computing environments such as cloud, SaaS and web-based utility computing.


 
