Overview of XenDesktop Components

We have made a number of posts on this blog discussing the value of Citrix XenDesktop, and we felt it was time to add a video on the topic. Sid Herron wrote a previous post, Minimum Requirements for XenDesktop, that you might find helpful after watching this video. Between this video and Sid’s post, you should have a good idea of what you would need for a basic deployment of Citrix XenDesktop.

Take a few minutes to learn what is required and what would be optional in a XenDesktop deployment, as well as how all the pieces would interact.

Citrix Synchronizer and XenClient Demo

Over the past few months, we’ve made several posts about XenClient. But in case you haven’t read them, or you need to refresh your memory, XenClient is (quoting from Citrix here): “…a high-performance, bare-metal hypervisor that runs directly on the client device hardware, dividing up the resources of the machine and enabling multiple operating systems to run side by side in complete isolation.”

Of course, there are other ways to run multiple operating systems side by side on a client device, although they may not give you the level of performance that XenClient – because of its small footprint – brings to the table. The tricky part is figuring out how to manage that environment once the user unplugs the laptop from the network and takes it on the road. How do you patch it? How do you back up user data? What do you do if the laptop is lost or stolen? If one of the OS instances is corrupted, or accidentally deleted, how do you get it back?

That’s the job of the Citrix Synchronizer – a virtual appliance that runs back in your data center and communicates with your XenClient-equipped laptops securely (via SSL) over the Internet. But rather than try to describe to you in detail exactly how that all works, it’s probably easier to simply show you. So take a few minutes to watch our own Steve Parlee demonstrate the interaction between Synchronizer and XenClient.

And Just to Prove the Point…

Monday, I wrote a post about some of the latest trends in cyber crime.

Tuesday afternoon, our Web site was hacked.

We didn’t realize it until we landed on the Google blacklist this morning, although I should have suspected something when I noticed, on Tuesday afternoon, that both of our WordPress instances – the one that powers this blog, and the one that powers our “News” page – had stopped working. But since I knew I was a couple of revisions behind, I elected to upgrade both instances to the latest release. When they came back up and were working again, I didn’t probe any deeper. I should have known better.

Log analysis indicates that our FTP account was compromised. Beginning at about 3:18 pm PDT on Tuesday afternoon, a series of files was uploaded to our server from an IP address that appears to be located in the UK (in the London area, to be more precise). The transfers were made using the FTP account for our domain. The attackers went through our site and changed every index.* page. Specifically, they placed a “hidden iframe” immediately following the <body> tag.

For those who aren’t conversant with HTML, you can think of an “iframe” as a window on a Web page that displays content from another Web page. Except that, in this case, the height, width, and border width of that window were set to “0.” The point is that when your browser loaded a page from our site, it would also load the content from the other site, but that content wouldn’t be visible on the page. That content was, no doubt, some kind of malware intended to do something bad to your system. The hidden iframe attack is one of the most common exploits out there, and is typically used in “drive-by” malware distribution campaigns where the bad guys try to place their hidden iframe on as many legitimate sites as possible. When you visit an affected site, your browser fetches the code, and then it’s a matter of how good the defenses are on your PC.
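
Incidentally, checking a site for this kind of injection is easy enough to automate. Here’s a minimal sketch in Python – the “/var/www” document root and the regular expression are illustrative assumptions, not the exact markup from our incident – that walks a web root and flags any index.* file containing a zero-sized iframe:

    import re
    from pathlib import Path

    # Signature of the classic "hidden iframe": width, height, or frameborder
    # forced to 0 so the frame never shows up on the rendered page. Real
    # injections vary, so treat this as a starting point, not a definitive detector.
    HIDDEN_IFRAME = re.compile(
        r'<iframe[^>]+(?:width|height|frameborder)\s*=\s*["\']?0["\']?',
        re.IGNORECASE,
    )

    def scan_site(web_root):
        """Yield every index.* file under web_root that contains a suspicious iframe."""
        for path in Path(web_root).rglob("index.*"):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file; skip it
            if HIDDEN_IFRAME.search(text):
                yield path

    if __name__ == "__main__":
        for hit in scan_site("/var/www"):  # point this at your own document root
            print(f"Possible injected iframe: {hit}")

It won’t catch every variation the bad guys come up with, but something that simple, run on a schedule, beats finding out from Google.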

Obviously, we’ve changed the FTP account credentials. But, frankly, we’re still not sure how the account was compromised in the first place. It was a pretty strong password, and not one that you’d expect to fall victim to a dictionary attack. We’ve been running malware scans on the machines that we normally use when we work on the Web site, and have yet to come up with a “smoking gun” that would explain how the credentials were compromised.

So…what to take away from this? First of all, it’s no fun to become a statistic. Second, nobody is immune to this sort of thing. Even the CBS News Web site was hit by an iframe attack not that long ago. Nobody is too big or too small to be targeted. Third, change your passwords regularly, even if you think you have strong ones. Fourth, be suspicious when something unusual happens. I should have dug deeper Tuesday afternoon, but it was late in the day when it happened, and I settled for what looked like an easy fix. Finally, it’s a pain in the you-know-what to go through and clean up the aftermath of something like this. It’s cost me most of today, plus we’ve been on the Google blacklist all day and probably won’t come off of it until sometime tomorrow when they’ve had time to re-scan our site.

The bad guys are out there, and they do want your stuff. Be careful.

Causes and Costs of Cyber Crime

I read a couple of items today about security and cyber crime that I found rather interesting. One was an article that came out a week ago on infoworld.com about the “First Annual Cost of Cyber Crime Study,” conducted by the Ponemon Institute. The study involved 45 midsize and large organizations, ranging in size from 500 to more than 105,000 employees, and representing a mixture of industries and government agencies. The study revealed that cyber crime cost these organizations an average of $3.8 million per year…each. The reported costs ranged from a low of $1 million to a high of $52 million per year.

The reported costs represent the direct cost of coping with attacks, including, for example, the amortized annual cost of a Web application firewall purchased to respond to an attack on a Web application. They also include the time spent responding to attacks, the cost of disruption to business operations, lost revenue, and the destruction of assets. The study found that it took an average of 14 days to respond to a successful cyber attack, at an average cost of over $17,000 per day.

Admittedly, a sample size of 45 companies is relatively small. But still – $3.8 million per year, average? Holy smoke!

The other piece of light reading will help to flesh out the picture and add some perspective. It’s the 2010 Data Breach Investigations Report conducted by the Verizon RISK team, in cooperation with the U.S. Secret Service. It combines data from Verizon’s 2009 case load with additional data contributed by the USSS to form a data set that spans six years and over 900 security breaches, representing over 900 million compromised records. About two-thirds of the breaches covered in the report have either not yet been disclosed, or never will be.

While the cases worked by the USSS more frequently involved insiders, in Verizon’s own cases, almost all data stolen in 2009 – 98% – was the work of criminals outside the victim organization. 85% of that data was stolen by “organized criminal groups.” For a definition of “organized criminal groups,” see Appendix A of the report…it’s pretty interesting reading in and of itself.

Not surprisingly, financial services organizations were most frequently targeted (33% of cases), for the same reason Willie Sutton robbed banks: that’s where the money is. But you may be surprised to learn that the hospitality industry wasn’t that far behind (23% of cases), followed by retail (15% of cases). And here are some other things that might surprise you (note that the following percentages add up to more than 100%, meaning that some cases involved more than one factor):

  • 48% of breaches involved “privilege misuse” (that’s up 26% from the year before). The report defines this as any use of resources or privileges in a manner contrary to that which was intended, whether malicious or non-malicious. This category includes obvious actions such as embezzlement or deliberate theft of information by an insider, but also losses that resulted from abuse of system access, use of unapproved devices, violations of an organization’s Web or Internet use policy, abuse of private knowledge, use of unapproved software or services, unapproved changes and workarounds, and violations of an organization’s asset / data disposal policy.
  • 40% resulted from hacking (down 24%) – the majority of which involved either the use of stolen login credentials, or SQL Injection attacks. A fair number also involved exploitation of default or guessable credentials (or cases where no credentials were required), and brute force and dictionary attacks.
  • 38% utilized malware (unchanged)
  • 28% employed “social tactics” (up 16%) – using deception (spoofing, phishing, forgery), manipulation, intimidation, bribery, extortion, etc., as a means of breaching an organization’s security. Social tactics are often combined with other categories, for example, malware designed to look like antivirus software.
  • 15% were physical attacks such as theft, tampering, and surveillance (up 6%)
  • And what may be the most astounding finding of all: “…there wasn’t a single confirmed intrusion that exploited a patchable vulnerability.” Does that mean you don’t have to pay attention to patching your systems? No, of course not. But it does mean that being current on all of your patches doesn’t guarantee you’re safe!

Here are some more commonalities in the attacks:

  • 98% of all data breached came from servers.
  • 85% of attacks “were not considered highly difficult.”
  • 61% were discovered by a third party(!)
  • 86% of victims had evidence of the breach in their log files(!!)
  • 96% of breaches were avoidable through simple or intermediate controls.
  • 79% of the victims that were subject to PCI/DSS regulations had not achieved compliance with the regulations. Admittedly, that means that 21% had achieved compliance, and were breached anyway, but why stack the deck against yourself? If you’re subject to the regulations, make sure you’re in compliance.

So what are the takeaways from all of this data? Although I would encourage you to download and read all 66 pages of the Verizon report, here are a few points to consider:

  • 86% of victims had evidence of the breach in their log files, yet 61% of the breaches were discovered by a third party. That suggests that, just maybe, we should be paying more attention to our log files. Now, I understand that there aren’t many cures for insomnia better than parsing through several servers’ worth of log files looking for anomalies. But that’s why there are automated tools these days that will do that for you.
  • SQL injection has been around for over ten years, and still causes a large number of data breaches. Here’s a high-level example: you have a form on your Web site that is intended to capture user input and stuff it into a SQL database. Maybe it’s the billing information for your on-line shopping cart. But instead of entering the data you’re expecting, an attacker enters a SQL language statement that’s intended to either extract data from the database, modify data in the database, or deliver malware to the system.

    You can’t fix this by applying a patch, modifying a setting, or changing a Web page. It’s almost always an input validation failure. That means you have to fix the code behind the application so that it actually verifies that what’s typed into a field is the kind of information that’s expected (there’s a brief code sketch illustrating the problem just after this list). It isn’t necessarily easy, and it isn’t necessarily inexpensive. But data loss isn’t cheap, either.

  • The use of stolen credentials was the top hacking method used. Two-factor authentication (e.g., RSA’s SecurID), which can largely render stolen credentials useless, has been around for years. Apparently not enough organizations are using it.
  • One of the more interesting (to me, anyway) recommendations in the Verizon report is to filter outbound traffic. That way, even if malware does get in the door, you have some measure of control over what information leaves your network. This is sometimes referred to as “Data Loss Prevention,” or “Content Security.” Here’s what they had to say about it:

    Most organizations at least make a reasonable effort to filter incoming traffic from the Internet. This probably stems from a (correct) view that there’s a lot out there that we don’t want in here. What many organizations forget is that there is a lot in here that we don’t want out there. Thus, egress filtering doesn’t receive nearly the attention of its alter ego. Our investigations suggest that perhaps it should. At some point during the sequence of events in many breaches, something (data, communications, connections) goes out that, if prevented, could break the chain and stop the breach. By monitoring, understanding, and controlling outbound traffic, an organization will greatly increase its chances of mitigating malicious activity.

    By a happy coincidence, one of our primary vendor partners, WatchGuard, recently introduced a line of appliances that are specifically designed for precisely this task. I’ll be writing more about that in a future post.

  • Don’t assume that you’re too small to interest the criminals. 9% of the breaches were in companies with ten or fewer employees, another 18% were in companies with 11 to 100 employees, and 23% were in companies with 101 to 1,000 employees.
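
To make the SQL injection point above a little more concrete, here’s a minimal sketch in Python using the standard library’s sqlite3 module. The table, column names, and the attacker’s input are all made up for illustration; the point is the difference between gluing user input directly into a SQL statement and handing it to the database driver as a parameter:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, card_number TEXT)")
    conn.execute("INSERT INTO customers VALUES ('Alice', '4111-1111-1111-1111')")

    user_input = "x' OR '1'='1"  # what an attacker might type into your billing form

    # Vulnerable: the input is spliced into the statement, so the OR '1'='1'
    # clause matches (and returns) every row in the table.
    vulnerable = conn.execute(
        "SELECT * FROM customers WHERE name = '" + user_input + "'"
    ).fetchall()
    print("Vulnerable query returned:", vulnerable)

    # Safer: the input is passed as a parameter, so the driver treats it as a
    # literal value rather than as SQL, and nothing matches.
    parameterized = conn.execute(
        "SELECT * FROM customers WHERE name = ?", (user_input,)
    ).fetchall()
    print("Parameterized query returned:", parameterized)

Validating input (making sure a “name” field actually looks like a name) is still worth doing, but parameterized queries are what keep an attacker’s text from ever being executed as SQL.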

And, finally, don’t assume that the situation is hopeless. Remember that only 4% of breaches were judged to have required difficult and expensive measures to avoid. To quote from the conclusions of the Verizon report, “Configuration changes and altering existing practices fix the problem(s) much more often than major redeployments and new purchases.” We do have the tools to get the job done. We just have to make up our minds to do it.

SAN Tips – Storage Repository Design

Back with another ManageOps video for your viewing pleasure. In this installment, our own Steve Parlee, ManageOps’s Director of Engineering, talks about SAN storage repository design concepts, and the effects your design choices have on things like snapshots, disk usage, and overall performance. In the process, you’ll also learn what we consider to be “best practice,” and some of the reasons why. As always, your comments will be appreciated. Enjoy!

iSCSI vs. Fibre Channel

We’ve had several posts here about storage virtualization (a.k.a. SANs) and the role it plays in both server and desktop virtualization. We made the decision some time ago to promote iSCSI SAN products rather than Fibre Channel, primarily because iSCSI uses technologies that most IT professionals are already familiar with – namely Gigabit Ethernet and TCP/IP – whereas Fibre Channel introduces a whole new fiber optic switching infrastructure into your computing environment, together with the new skills required to manage it.

But there are many who maintain that, although a Fibre Channel SAN infrastructure may be more expensive, and may require a different set of skills to manage, it offers superior performance.

So I was particularly interested to run across an article by Greg Shields on techtarget.com entitled “Fibre Channel vs. iSCSI SANs: Who Cares?” I would encourage you to click through and read this article in its entirety, although you may have to register and give up your email address to do so. But here are a couple of tidbits from the article to whet your appetite:

iSCSI vs. Fibre Channel: Who cares?
The answer: Statistics suggest that it doesn’t really matter…In most real-world scenarios, the performance difference between Fibre Channel and iSCSI SANs is negligible. Partisans will extol the raw performance statistics of their favorite SAN type, but it’s fantastically difficult to translate raw performance specifications into real-world user experience…

Performance alone may not be a decisive factor, but a SAN’s ease of administration can be. The management tools and techniques for Fibre Channel and iSCSI storage infrastructures are substantially different…the skills and experience required to run a Fibre Channel storage infrastructure are difficult to come by – often requiring additional consulting support for most implementations to start correctly. On the other hand, iSCSI SANs lean heavily on the existing TCP/IP protocol. If you have network engineers in your environment, they probably possess most of the necessary skills to successfully manage an iSCSI storage infrastructure.

So, while I would once again encourage you to read Greg’s post in its entirety (so you can assure yourself that I’m not quoting him out of context), I must say that I find his comments gratifying, because they tend to reinforce our own conclusions: unless you already have a Fibre Channel SAN infrastructure, there’s no compelling reason not to go with an iSCSI solution, and there are several reasons in favor of doing so, including cost and simplicity of management.

Anybody out there disagree? And, if so, can you tell me why, exactly, you feel that Fibre Channel is superior?

Citrix Has Your Back – Again

I just read an interesting blog post over on ZDNet, entitled The Changing Face of IT: Five Trends to Watch. As I read through the article, I was struck by how Citrix solutions can enable IT organizations to deal with these trends. Consider:

  1. The consumerization of IT – “Workers are bringing their own laptops and smartphones into the office and connecting them to corporate systems. More people than ever are telecommuting or working from home for a day or two a week. And, the number of Web-based tools has increased dramatically…”

    Yep. In fact many companies are instituting “BYOPC” (Bring Your Own PC) policies, because in the long run it can be less expensive to give employees a fixed allowance and allow them to buy whatever they want than it is to issue – and maintain – a company-owned laptop. Citrix themselves instituted this policy a few years ago.

    If you’re using XenApp or XenDesktop to provide access to your key line-of-business applications, you don’t care what the endpoint is. If your employee prefers a MacBook, fine. Want to use an iPad? No problem. Connecting in from your home PC because your kids are sick? We’ve got that covered, too. Just install the Citrix Receiver and you’re good to go.

  2. The borderless network – “…today’s IT security model is more about risk management than network protection. Companies have to identify their most important data and then make sure it’s protected no matter who’s accessing it and from wherever and whatever device they’re accessing it from.”

    Citrix likes to say that their products are “Secure by Design,” meaning that security is built into them from the ground up. First of all, when you’re accessing your virtual desktop remotely, or running a published application from a XenApp server, the data never leaves the data center. The remote endpoint (whatever it is) is just sending keystrokes and mouse movements to the data center and getting back pixel updates. On top of that, we can encrypt that data connection using the Citrix Access Gateway.

    Citrix also gives you very granular control over whether files can be copied between client and server, and/or whether print jobs can be directed to a client-attached printer. In fact, using Advanced Access Control policies, those controls can be context-sensitive, i.e., you might allow files to be copied to the client device if the client device is a company-owned laptop, but not if it is a home PC; or you might allow client-attached printing if the client is connecting from a branch office, but not if the same user, using the same client device, is connecting from home, or from a hotel.

  3. The cloudy data center – Let me go on record as saying that the most cloudy thing about the cloud is trying to understand what someone means when they say the word. Not unlike the word “portal” a few years ago, the first question that usually needs to be asked in any discussion about cloud computing is: “When you say ‘cloud,’ what exactly do you mean?”

    But the point to remember is that when you’re delivering applications via Citrix, users don’t know and don’t care where the data center is or where the applications are being executed. It doesn’t matter. Want to move your entire infrastructure to a co-lo? Fine. Want to have multiple data centers with automatic failover from one to the other? We can do that, too. By some definitions of the term, we’ve been building “private clouds” since the release of WinFrame back in the mid-90s.

  4. The state of outsourcing – “Outsourcing is thriving in many different forms, and it’s reasonable to expect that it will accelerate.”

    We made the point above that users don’t know and don’t care where the data center is. The fact is, for about 90% of what they need to do, neither do the administrators. Virtualization in general, and Citrix products in particular, make it very easy to administer, troubleshoot, and repair issues remotely. We built the entire Evans Fruit Company infrastructure without ever having our engineer set foot on site. In fact, actually dispatching an engineer to a customer location is now the exception rather than the rule.

  5. The mobilization paradigm – “While PCs still make sense on the desks of knowledge workers, for all of these other workers who regularly move around as part of their daily job, the stationary PC often changes the natural flow of their routine because they have to stop at a system to enter data or complete a task. That’s about to change. Mobile computers in the form of smartphones and touchscreen tablets (like the iPad) have taken a big leap forward in the past four years. They are instant-on, easy to learn because of the touchscreen, and they have a whole new ecosystem of applications designed for the touch experience…”

    Very true…but these same users are still going to need access to your traditional line-of-business applications, which will not be transformed overnight into touchscreen-enabled apps. It is axiomatic that, in IT, nothing ever actually goes away – instead, new technology just gets layered over the top of old technology…which is why you’ll still find applications running on big mainframes in a lot of enterprises. So how do you manage that transition?

    Once again, Citrix comes through. There’s a Citrix Receiver for the iPhone, one for the iPad, one for Windows Mobile phones, one for the Android, and just a couple of months ago, Citrix released a version of the Receiver for BlackBerry devices. And, of course, Receivers for Windows, Mac, and Linux PCs have long been available. I don’t know of any other product or technology that offers this kind of flexibility in delivering applications to users regardless of location, connection, or endpoint device.

So a big “Thank you!” to Jason Hiner for an excellent post. You’ve just described, in a nutshell, why ManageOps is still excited to be a Citrix partner after all these years. Just remember, as you work to adapt to all of these trends that are indeed changing the IT landscape, we’ve got your back.

Just Sign the Check Right Here – We’ll Fill In the Amount Later

Back in the old days of minicomputers and mainframes, we used to joke about IBM’s ability to, for all intents and purposes, get the customer to sign a blank check. They were better than anybody I’ve ever seen at getting people to commit to a solution when they really had no idea what the ultimate cost would be – and they were successful because of another cliche (which became a cliche because it was so accurate): “Nobody ever got fired for buying from IBM.” The message was basically, “Yes, we may be more expensive than everybody else, but we’ll take care of you.”

For the most part, those days are long gone, which made it all the more amazing to me to read that VMware is adopting per-VM licensing for most of its management products.

The article nails the basic problem with this licensing approach:

You know how many processors you have on a system, and that’s a fixed number. But the number of VMs on one host — let alone throughout your entire infrastructure — is regularly in flux. How do you plan your purchasing around that? And how do you make sure you don’t violate your licensing terms?

Hey, it’s easy – you just let VMware tell you what to put on your check at the end of the year:

You estimate your needs for the next year and buy licenses to meet those needs. Over the course of those 12 months, vCenter Server calculates the average number of concurrently powered-on VMs running the software. And if you end up needing more licenses to cover what you used, you just reconcile with VMware at the end of the year.

And, before you ask, no, you don’t get money back if you use fewer licenses than you originally purchased.
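
To put some (entirely made-up) numbers on that, here’s a back-of-the-envelope sketch of the year-end true-up. The actual averaging vCenter does is more granular than this, so treat it purely as an illustration of the asymmetry:

    # Hypothetical monthly averages of concurrently powered-on VMs.
    monthly_avg_vms = [40, 42, 45, 52, 60, 58, 55, 50, 48, 47, 46, 44]
    licenses_purchased = 45  # what you estimated and paid for up front

    yearly_average = sum(monthly_avg_vms) / len(monthly_avg_vms)
    shortfall = max(0, round(yearly_average) - licenses_purchased)

    print(f"Average concurrent VMs over the year: {yearly_average:.1f}")  # 48.9
    print(f"Additional licenses owed at year-end: {shortfall}")           # 4
    # Note the asymmetry: if the average comes in *below* what you bought,
    # the shortfall is simply zero -- there is no refund for the difference.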

Sounds to me like a sweet deal – for VMware.

By comparison, the most expensive version of XenServer is $5,000 per server (not per processor, not per VM), and all of the management functionality is included. And the basic version of XenServer, which includes XenMotion live migration, is free, and it still includes the XenCenter distributed management software. (Here’s a helpful comparison chart of which features are included in which version of XenServer.)

A number of years ago, I attended a seminar that discussed the product adoption curve, and how products moved from the “innovation” phase to the “commodity” phase. The inflection point for a particular market was referred to as the “point of most” – where most of the products met most of the needs of most of the customers most of the time. When this point is reached, additional feature innovation no longer justifies a premium price.

The fact is that XenServer and Hyper-V are rapidly achieving feature parity with VMware. If we haven’t reached the “point of most” yet, we certainly will before much more time goes by. So even if you have a substantial investment in VMware already, at some point you have to re-examine what it’s costing every year, don’t you? Or are you OK with just signing a check and letting them fill in the amount later?

Is Office 2010 Worth It?

Every time Microsoft releases a new version of Office, we all have to ask ourselves whether there is enough business value in the new and improved version to justify the time and effort of rolling out the upgrade, listening to our users complain about the things that may not work the way they used to, and helping them through the rough spots.

Since ManageOps is a Microsoft Partner, we don’t have to pay for the Office licenses we use internally. Moreover, it’s important for us to actually use the technology that we’re promoting to our customers, so that’s another reason for us to upgrade. Even so, it costs us time and effort to upgrade everybody, and we have other critical applications that depend on Office – like the Word merge app that allows us to print quotes and sales orders from our MS-CRM records – so we have to make sure that those dependencies don’t get broken. So, like you, we have to ask, “Is it really worth it? Is there that much difference between Office 2007 and Office 2010?”

Well, actually there’s more than you might think, and J. Peter Bruzzese wrote an article about it over on infoworld.com earlier this week. Here’s a quick list of his “top 25” new Office 2010 features. If any of them catch your eye, I’d encourage you to read his article for a more detailed description:

  1. Universal ribbon – the ribbon interface is now part of every Office application.
  2. Customizable ribbon – don’t like the defaults? Customize it.
  3. Backstage view (behind the “File” tab of an application)
  4. Paste preview
  5. Office Web Apps
  6. Protected View
  7. More themes
  8. Insert a screenshot
  9. Crop images to a shape from within the app
  10. New photo-editing options in Word
  11. Navigation pane in Word
  12. “Sparklines” (Excel)
  13. “Slicers” (Excel)
  14. 64-bit support, which allows for Excel workbooks larger than 2 GB
  15. Video editing from within PowerPoint
  16. Broadcast slideshows (PowerPoint)
  17. Distribute slideshows as video (PowerPoint)
  18. Animation painter (PowerPoint)
  19. Sections (PowerPoint)
  20. Transition improvements (PowerPoint)
  21. Outlook conversation view
  22. Outlook MailTips
  23. Outlook Social Connector
  24. Outlook “quick steps”
  25. Outlook “Clean Up”

So, a tip of the antlers to Mr. Bruzzese for coming up with a great list. Again, if any of these catch your interest, I’d encourage you to read more about these features in the InfoWorld article.

The Trade Up Is Dead – Long Live the Trade Up!

As we told you many times, and in many ways, the special Citrix XenDesktop Trade-Up promotion ended on June 30. However, as we expected, Citrix has announced a new trade-up promotion. So there is still a migration path from XenApp to XenDesktop, although (as we also expected) it will cost you more than it would have had you acted before June 30.

You can still get the two-for-one deal if (1) your Subscription Advantage is current, and (2) you trade up all of your XenApp licenses.

Citrix has also extended the trade-up offer to customers who own XenApp Fundamentals (a.k.a. Access Essentials), which is great news. Under the earlier promo, these customers would have had to upgrade to XenApp Enterprise first, and then trade up to XenDesktop. Now they can trade up for the same price as customers who own XenApp Advanced Edition (although the two-for-one deal is not available for XenApp Fundamentals).

Here’s the pricing matrix for the new promo, which will run through December 31, 2010:

XenDesktop Trade Up Pricing, July 1 - Dec 31, 2010