InterOp Las Vegas 2014

I spent the day at InterOp Las Vegas 2014, and for me it was “Geek Heaven.” InterOp has always been one of my favorite technology trade shows, and this year didn’t disappoint. I have been told I don’t appear “geeky,” but I am indeed a true geek through and through! Walking the aisles for hours absorbing all the latest technology is pretty good entertainment for any geek.

Today I was impressed with technology from many vendors, but there were a few standouts. Pogo Linux, in partnership with SanDisk, is building some very impressive performance into several Storage Area Network products, including Nexenta and OS Nexus QuantaStor. If you are in the market for a very high performance “Software Defined Storage” solution, these are excellent choices that will outperform the big boys at a fraction of the cost. I spent some time in the SUSE Linux booth chatting about their OpenStack cloud products and promised to spin up a test of their solution when I return home.

Stratus Technologies was another standout that I believe is continuing to evolve with its recently acquired everRun high-availability/fault-tolerant server technology. I have worked with everRun since the ’90s, and it has always been a very impressive way to keep workloads running even when multiple components, or even a whole server, fail. It does this with a slick mirrored configuration: two virtual machines run the same operating system and application software simultaneously in “lockstep,” so if one server fails, the survivor simply keeps humming along as if nothing happened. Of course, top vendors Citrix, Cisco, HP, Dell, and many others were there in force.

InterOp was founded 26 years ago as a workshop on TCP/IP and has a rich history as a conference that raises the skills of technology professionals.

Great Windows 8.1 Experience!

Today I had a truly great Windows 8.1 experience! I know some might be skeptical, and I for one felt Microsoft faced some challenges with user acceptance of Windows 8. But I am a big fan of Windows 8, primarily because it provides a multi-computer experience in one device. My “truly great Windows 8.1 experience” came while setting up a new laptop. We all dread setting up or refreshing a laptop, because historically it has been difficult and time-consuming to transfer files and settings. But it’s a new day for Windows, and transferring all of my settings, Metro apps, and data was as simple as logging into my Microsoft Live account and answering a few questions. The first question, after I provided my Live credentials, was to enter my wireless security code; the second was “we found this computer on your network that belongs to you, do you want to copy the settings to this computer?” and BAM, all of my settings and data began streaming to my new PC. This was a truly great Windows 8.1 experience!

Fujitsu Ultrabook First Impressions

Last night I had the opportunity to configure a new Fujitsu U Series Ultrabook. I like my computers the way I like my cars: sleek and fast, and this U904 notebook delivers that and more! The system is carved out of a single slab of titanium and is ultra-light and ultra-thin. The multi-touch 14” screen is a blast to use and creates a whole new experience with Windows 8.1. This is hands down one of the sleekest systems I have used, and it has plenty of power for demanding applications. I would highly recommend giving Fujitsu a look the next time you are in the market for a quality notebook.

So You Want to Be a Hosting Provider? (Part 3)

In Part 1 of this series, we discussed the options available to aspiring hosting providers:

  1. Buy hardware and build it yourself.
  2. Rent hardware and build it yourself.
  3. Rent VMs (e.g., Amazon, Azure) and build it yourself.
  4. Partner with someone who has already built it.

We went on to address the costs and other considerations of buying or renting hardware.

Then, in Part 2, we discussed using the Amazon EC2 cloud, with cost estimates based on the pricing tool that Citrix provides as part of the Citrix Service Provider program. We stressed that Amazon has built a great platform for building a hosting infrastructure for thousands of users, provided that you’ve got the cash up front to pay for reserved instances, and that your VMs only need to run for an average of 14 hours per day.

Our approach is a little different.

First, we believe that VARs and MSPs need a platform that will do an excellent job for their smaller customers – particularly those who do not have a large staff of IT professionals, or those who rely on what AMI Partners, in a study done on behalf of Microsoft, referred to as an “Involuntary IT Manager” (IITM). These are the people who end up managing their organizations’ IT infrastructures because they have an interest in technology, or perhaps because they just happen to be better at it than anyone else in the organization, but who have other job responsibilities unrelated to IT. Often these individuals are senior managers, partners, or owners, and in nearly all cases they could bring more value to the organization if they could spend 100% of their time doing what they were originally hired to do. Getting rid of on-site servers and moving data and applications to a private hosted cloud allows these people to regain that lost productivity.

Second, we believe that most of these customers are going to need access to their cloud infrastructure on a 24/7 basis. Smaller companies tend to be headed by entrepreneurial people who don’t work traditional hours, and who tend to hire managers who also don’t work traditional hours. Turning their systems off for 10 hours per day to save on run-time costs simply isn’t going to be acceptable.

Third, we believe that the best mix of security and cost-effectiveness for most customers is to have a multi-tenant Active Directory, Exchange, and SharePoint infrastructure, but to dedicate one or more XenApp server(s) to each customer, along with a file server and whatever other application servers they may require (e.g., SQL Server, accounting server, etc.). This is done not only for security reasons, but to avoid “noisy neighbor” problems from poorly behaved applications (or users).

In ManageOps’s multi-tenant hosting infrastructure, each customer is a separate Organizational Unit (OU) in our Active Directory. Each customer’s servers are in a separate OU and are isolated on a customer-specific VLAN. Access from the public Internet is secured by a shared WatchGuard perimeter firewall and a Citrix NetScaler SSL VPN appliance. Multi-tenant customers who need a permanent VPN connection to one or more office locations can have their own Internet port and their own firewall.
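To make that isolation model concrete, here is a minimal sketch of how a per-tenant provisioning plan along these lines might be represented. It is purely illustrative: the OU path, server names, and VLAN numbering are hypothetical, not taken from the actual ManageOps environment.

```python
# Illustrative sketch of the per-tenant isolation described above:
# one OU per customer, a customer-specific VLAN, and dedicated servers.
# All names, OU paths, and VLAN IDs are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TenantPlan:
    name: str                                           # customer name
    vlan_id: int                                        # customer-specific VLAN
    xenapp_servers: int = 1                             # dedicated XenApp server(s)
    extra_servers: list = field(default_factory=list)   # e.g., SQL, accounting apps

    @property
    def ou_path(self) -> str:
        # Each customer is a separate OU in the shared Active Directory
        return f"OU={self.name},OU=Tenants,DC=example,DC=local"

    def server_list(self) -> list:
        servers = [f"{self.name}-XA{i + 1}" for i in range(self.xenapp_servers)]
        servers.append(f"{self.name}-FS1")               # dedicated file server
        servers.extend(self.extra_servers)
        return servers

if __name__ == "__main__":
    plan = TenantPlan(name="AcmeCo", vlan_id=110, xenapp_servers=2,
                      extra_servers=["AcmeCo-SQL1"])
    print(plan.ou_path)        # OU=AcmeCo,OU=Tenants,DC=example,DC=local
    print(plan.server_list())  # ['AcmeCo-XA1', 'AcmeCo-XA2', 'AcmeCo-FS1', 'AcmeCo-SQL1']
```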

We also learned early on that some customers prefer not to participate in any kind of multi-tenant infrastructure, and others are prevented from doing so by security and compliance regulations. To accommodate these customers, we provision completely isolated environments with their own Domain Controllers, Exchange Servers, etc. A customer that does not participate in our multi-tenant infrastructure always gets a customer-specific firewall and NetScaler, and customer-specific Domain Controllers. At their option, they can still use our multi-tenant Exchange Server, or have their own.

Finally, we believe that many VARs and MSPs will benefit from prescriptive guidance not just on how to build a hosting infrastructure, but on how to sell it. That’s why our partners have access to a document template library that covers how to do the necessary discovery to properly scope a cloud project, how to determine what cloud resources will be required, how to price out a customized private hosted cloud environment, how to position the solution to the customer, how to write the final proposal, how to handle customer data migration, and much, much more.

We believe that partnering with ManageOps makes sense for VARs and MSPs because that’s the world we came from. Our hosting platform was built by a VAR/MSP for VARs/MSPs, and we used every bit of the experience we gained from twenty years of working with Citrix technology. That’s the ManageOps difference.

February 2014 ManageOps Partnerships

ManageOps is off to a great start in 2014, signing four new partners to its Cloud Hosting Partner Program in February. These partners include:

Professional Integrations of Tustin, CA

Skypoint Consulting of Chicago, IL

VDI Space of Incline Village, NV

dvaData Storage of Kirkland, WA

Dan Velando, president of dvaData Storage, talked about the partnership with ManageOps in a recent press release: “The partnership with ManageOps allows us to complement what we provide and gives us the ability to deliver a virtual cloud solution to our customers. Customers now have the ability to break the never-ending cycle of on-premise server hardware and software upgrades and move all of their data and applications to a secure, cloud-based infrastructure that can be accessed over any Internet connection, allowing them to focus on their business while leveraging our expertise to help grow their companies.”

A collaboration with ManageOps ensures that the technology running a customer’s business becomes almost invisible to its users. By becoming a partner, you can keep your current in-house or managed-services customers who want to move to a cloud-based system without having to build your own environment. To learn about our partner program, please visit www.ManageOps.com/Partners.

So You Want to Be a Hosting Provider? (Part 2)

In Part 1 of this series, we talked about the options available to prospective hosting providers, and specifically about the costs of purchasing your own equipment. In this post we’re going to drill down into the costs of building a Citrix Service Provider hosting infrastructure on Amazon.

Amazon has some great offerings, and Citrix has spent a lot of time lately talking about using the EC2 infrastructure as a platform for Citrix Service Providers. There was an entire breakout session devoted to this subject at the 2014 Citrix Summit conference in Orlando. Anyone who signs up as a Citrix Service Provider can get access to a spreadsheet that allows you to input various assumptions about your infrastructure (e.g., number of users to support, assumed number of users per XenApp server, number of tenants in your multi-tenant environment, etc.) and calculates how many of what kind of compute instances you will need as well as the projected costs (annualized over three years). At first glance, these costs may look fairly attractive. But there are a number of assumptions built into the cost model that should make any aspiring service provider think twice:

  • It assumes that you’ve got enough users lined up that you can get the economies of scale from building an infrastructure for several hundred, if not thousands, of users.
  • It assumes that you’ve got enough free cash to pay up front for 3-year reserved instances of all the servers you’ll be provisioning.
  • It assumes that, on average, your servers will need to run only 14 hours per day. If your customers expect to be able to work when they want to work, day or night, this will be a problem.
  • It assumes that you will be able to support an average of 150 concurrent users on a XenApp server that’s running on a “Cluster Compute Eight Extra Large” instance. Anyone who has worked with XenApp knows that these assumptions must be taken with a very large grain of salt, as the number of concurrent users you can support on a XenApp server is highly dependent on the application set, and doesn’t necessarily scale linearly as you throw more processors at it.

If all of these assumptions are correct, the Citrix-provided spreadsheet says that you can build an EC2 infrastructure that will support 1,000 concurrent users (assuming 10 customers with 100 users each for the multi-tenancy calculation) for an average cost/user/month of $45.94 over a three year period. But that number is misleading, because you have to come up with $377,730 up front to reserve your EC2 instances for three years. So your first-year cost is not $551,270, but $803,081 – that’s actually $66.92/user/month for the first year, and then it drops to $35.45/user/month in years two and three, then back to $66.92/user/month in the fourth year, because you’ll have to come up with the reservation fees again at the beginning of year four.
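For readers who want to see how those figures fit together, here is a rough back-of-the-envelope check. It is only a sketch based on the numbers quoted above; the Citrix spreadsheet itself contains considerably more detail.

```python
# Rough sanity check of the EC2 cost figures quoted above (1,000 users).
# The first two figures come from the text; the rest is simple arithmetic.
users = 1000
avg_cost_per_user_month = 45.94      # 3-year average from the Citrix spreadsheet
upfront_reservation = 377_730        # 3-year reserved-instance fees, paid up front

three_year_total = avg_cost_per_user_month * users * 36
print(f"3-year total:  ${three_year_total:,.0f}")     # ~ $1,653,840

# Year 1 carries the reservation fees plus roughly a third of the remaining run cost
ongoing_per_year = (three_year_total - upfront_reservation) / 3
year1 = upfront_reservation + ongoing_per_year

print(f"Year 1:        ${year1:,.0f}  (~${year1 / users / 12:.2f}/user/month)")
print(f"Years 2 and 3: ${ongoing_per_year:,.0f}/yr (~${ongoing_per_year / users / 12:.2f}/user/month)")
```

Run as-is, this reproduces (to within rounding) the roughly $803,000 first-year outlay and the $66.92 vs. $35.45 per-user split described above.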

There are a couple of other things about this model that are troublesome:

  1. By default, it assumes only a single file server for 1,000 users, meaning that you would administer security strictly via AD permissions. It also means that if anything happens to that file server, all of your tenants are impacted. If we instead provision ten file servers, so that each of the ten tenants has a dedicated file server, it bumps the average cost by roughly $5/user/month.
  2. If your user count is 100 users per tenant, but you’re expecting to support 150 users per XenApp server, you’ll obviously have users from multiple tenant organizations running concurrently on the same XenApp server. This, in turn, means that if a user from one tenant organization does something that impacts XenApp performance – e.g., launches the Production Planning Spreadsheet from Hell that pegs the processor for five minutes recalculating the entire spreadsheet whenever a single cell is changed – it will affect more than just that tenant organization. (And, yes, I know that there are ways to protect against runaway processor utilization – but that’s still something else you have to set up and manage, and, depending on how you approach the problem, potentially another licensing component you have to pay for.) If we assume only 100 users per XenApp server, so that we can dedicate one XenApp server to each tenant organization, it bumps the average cost by roughly another $1.50/user/month.

“But wait,” you might say, “not many VARs/MSPs will want to – or be able to – build an infrastructure for 1,000 users right off the bat.” And you would be correct. So let’s scale it back a bit. Let’s look at an infrastructure that’s built for 250 users, and let’s assume that breaks down into five tenants with 50 users each. Let’s further assume, for reasons touched on above, that each customer will get a dedicated file server and one dedicated XenApp server. We’ll dial those XenApp servers back to “High CPU Extra Large” instances, which have 4 vCPUs and 7.5 GB of RAM each. Your average cost over three years, still assuming 3-year reserved instances, jumps to $168.28/user/month, and you must still be prepared to write a check for just over $350,000 for the 3-year reservation fees. Why the big jump? Primarily because there is a minimum amount of “overhead” in the server resources required simply to manage the Citrix infrastructure, the multi-tenant Active Directory and Exchange infrastructure, etc., and that overhead is now spread across fewer users.
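That overhead effect is easy to illustrate. The sketch below uses made-up dollar figures (they are not the Citrix spreadsheet’s numbers) simply to show how per-user cost climbs as a roughly fixed infrastructure overhead is divided among fewer users:

```python
# Illustration of why smaller deployments cost more per user: a roughly fixed
# infrastructure overhead (Citrix management servers, multi-tenant AD/Exchange,
# etc.) is spread across fewer users. The dollar figures below are hypothetical
# and chosen only to show the shape of the curve.
fixed_overhead_per_month = 8_000     # hypothetical: management, AD, Exchange servers
variable_cost_per_user = 30.00       # hypothetical: XenApp, file server, storage share

for users in (1000, 500, 250, 100):
    per_user = variable_cost_per_user + fixed_overhead_per_month / users
    print(f"{users:>5} users -> ${per_user:6.2f}/user/month")
```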

Now consider that all of the prices we’ve been looking at so far cover only the compute and storage resources. We haven’t begun to factor in the monthly cost of Citrix or Microsoft Service Provider licensing. In round numbers, that will add another $25/user/month or so to your cost, including MS Office. Nor have we accounted for the possibility that some of your users may need additional SPLA applications, such as Visio or Project, or that some tenants may require a SQL server or some other additional application server. Nor have we accounted for the possibility that some of your tenants may require access to the infrastructure on a 24×7 basis, meaning that their servers have to run 24 hours per day, not just 14.

This is why, at the aforementioned 2014 Citrix Summit session in Orlando, the numbers presented were challenged by several people during the ensuing Q&A; the general feedback was that they simply didn’t work in the real world.

So let’s quickly review where we are: As stated in Part 1 of this series, an aspiring hosting provider has four basic choices:

  1. Buy hardware and build it yourself. This was discussed in Part 1.
  2. Rent hardware (e.g., Rackspace) and build it yourself. This was not covered in detail, but once you’ve developed the list of equipment for option #1, it’s easy enough to get quotes for option #2.
  3. Rent VMs, as we have discussed above, and build it yourself.
  4. Partner with someone who has already built the required infrastructure.

We would respectfully submit that, for most VARs/MSPs, option #4 makes the most sense. But we’re biased, because (full disclosure again) ManageOps has already built the infrastructure, and we know that our costs are significantly less than it would take to replicate our infrastructure on EC2. And we’re looking for some good partners.

In Part 3, we’ll go into what we believe an infrastructure needs to look like for a DaaS hosting provider that’s targeting the SMB market, so stay tuned.

So You Want to Be a Hosting Provider? (Part 1)

If you’re a VAR or MSP, you’ve been hearing voices from all quarters telling you that you’ve got to get into cloud services:

  • The 451 Research Group has estimated that, by 2015, the market for all kinds of “virtual desktops” will be as large as $5.6 billion. IDC estimates that the portion of these virtual desktops sourced solely from the cloud could be over $600 million by 2016, growing at more than 84% annually.
  • John Ross, technology consultant and former CTO of GreenPages Technology Solutions, was quoted in a crn.com article as saying, “This is the last time we are going to see hardware purchases through resellers for many, many years.” He predicts that 50% of the current crop of resellers will either be gone or have changed to a service provider model by 2018.
  • The same article cited research by UBM Tech Channel (the parent company of CRN) which indicated that “vintage VARs” that stay with the current on-premises model will have to add at least 50% more customers in the next few years to derive the same amount of sales, which will require them to increase their marketing budgets by an order of magnitude.
  • Dave Rice, co-founder and CTO of TrueCloud in Tempe, AZ, predicted in the same article that fewer than 20% of the current crop of solution providers will be able to make the transition to the cloud model. He compares the shift to cloud computing to the kind of transformational change that took place when PCs were first introduced to the enterprise back in the 1980s.

If you place any credence at all in these predictions, it’s pretty clear that you need to develop a cloud strategy. But how do you do it?

First of all, let’s be clear that, in our opinion, selling Office 365 to your customers is not a cloud strategy. Office 365 may be a great fit for some customers, but it still assumes that most computing will be done on a PC (or laptop) at the client endpoint, and your customer will still, in most cases, have at least one server to manage, back up, and repair when it breaks. Moreover, you are giving up a great deal of account control, and account “stickiness,” when you sell Office 365.

In our opinion, a cloud strategy should include the ability to make your customers’ servers go away entirely, move all of their data and applications into the cloud, and provide them with a Windows desktop, delivered from the cloud, that the user can access any time, from any location where Internet access is available. (Full disclosure: That’s precisely what we do here at ManageOps, so we have an obvious bias in that direction.) There’s a pretty good argument to be made that if your data is in the cloud, your applications should be there too, and vice versa.

The best infrastructure for such a hosting environment (in the opinion of a lot of hosting providers, ManageOps included) is a Microsoft/Citrix-powered environment. Currently, the most commonly deployed infrastructure is Windows Server 2008 R2 with Citrix XenApp v6.5. Microsoft and Citrix both have Service Provider License Agreements available so you can pay them monthly as your user count goes up. However, once you’ve signed those agreements, you’re still going to need some kind of hosting infrastructure.

Citrix can help you there as well. Once you’ve signed up with them, you can access their recommended “best practice” reference architecture for Citrix Service Providers. That architecture looks something like this:
[Diagram: Citrix Service Provider reference architecture]

When you’ve become familiar enough with the architectural model to jump into the deep end of the pool and start building servers, your next task is to find some servers to build. Broadly speaking, your choices are:

  1. Buy several tens of thousands of dollars (at least) of server hardware, storage systems, switches, etc., secure some space in a co-location facility, rack up the equipment, and start building servers. Repeat in a second location, if geo-redundancy is desired. Then sweat bullets hoping that you can sign enough customers to not only pay for the equipment you bought, but make enough profit that you can afford to refresh that hardware in three or four years.
  2. Rent hardware from someone like Rackspace, and build on their platform. Again, if you want geo-redundancy, you’re going to need to pay for hardware in at least two separate Rackspace facilities to ensure that you have something to fail over to if the need ever arises.
  3. Rent VMs from someone like Amazon or Azure. Citrix has been talking a lot about this lately, and has even produced some helpful pricing tools that will allow you to estimate your cost/user/month on these platforms.
  4. Partner with someone who has already built it, so you can start small and “pay as you grow.”

Now, in all fairness, the reference architecture above is what you would build if you wanted to scale your hosting service to several thousand customers. A wiser approach for a typical VAR or MSP would be to start much smaller. Still, you will need at least two beefy virtualization hosts – preferably three so if you lose one, your infrastructure is still redundant – a SAN with redundancy built in, a switching infrastructure, a perimeter firewall, and something like a Citrix NetScaler (or NetScaler VPX) for SSL connections into your cloud.

Both VMware and Hyper-V require server-based management tools (vCenter and System Center, respectively), so if you’ve chosen one of those products as your virtualization platform, don’t forget to allocate resources for the management servers. Also if you’re running Hyper-V, you will need at least one physical Domain Controller (for technical reasons that are beyond the scope of this article). Depending on how much storage you want to provision, and whose SAN you choose, you’re conservatively looking at $80,000 – $100,000. Again, if geo-redundancy is desired, double the numbers, and don’t forget to factor in the cost of one or more co-location facilities.

Finally, you should assume at least 120 – 150 hours of work effort (per facility) to get everything put together and tested before you should even think of putting a paying customer on that infrastructure.
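To pull the estimates above together, here is a back-of-the-envelope budget for the “start small” build just described. The line items, prices, and hourly rate are illustrative placeholders consistent with the ranges quoted above, not vendor quotes or ManageOps figures.

```python
# Back-of-the-envelope budget for the "start small" build described above.
# Line items, prices, and the hourly rate are illustrative placeholders only.
single_site = {
    "virtualization hosts (3)":   30_000,
    "redundant SAN":              35_000,
    "switching":                   8_000,
    "perimeter firewall":          4_000,
    "NetScaler / NetScaler VPX":   8_000,
    "racks, PDUs, co-lo setup":    5_000,
}

site_cost = sum(single_site.values())
print(f"Single-site hardware: ${site_cost:,}")        # lands in the $80k-$100k range
print(f"Geo-redundant (x2):   ${site_cost * 2:,}")    # double it for a second facility

# The build-effort estimate above is 120-150 hours per facility
hourly_rate = 125   # hypothetical engineering rate
for hours in (120, 150):
    print(f"Build effort, {hours} hrs/site @ ${hourly_rate}/hr: ${hours * hourly_rate:,}")
```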

If you’re not put off by the prospect of purchasing the necessary equipment, securing the co-lo space, and putting in the required work effort to build the infrastructure, you should also begin planning the work required to successfully sell your services: Creating marketing materials, training materials, and contracts will take considerable work, and creating repeatable onboarding and customer data migration processes will be critical to creating a manageable and scalable solution. If, on the other hand, this doesn’t strike you as a good way to invest your time and money, let’s move on to other options.

Once you’ve created your list of equipment for option #1, it’s easy enough to take that list to someone like Rackspace and obtain a quote for renting it so you can get a feeling for option #2. The second part of this series will take a closer look at the next option.

“Data Gravity” and Cloud Computing

Last July, Alistair Croll wrote an interesting post over at http://datagravity.org on the concept of “data gravity” – a term first coined by Dave McCrory. The concept goes like this: The speed with which information can get from where it is stored to where it is acted upon (e.g., from the hard disk in your PC to the processor) is the real limiting factor of computing speed. This is why microprocessors typically have built-in cache memory – to minimize the number of times you have to go back to the storage repository to access the data. What’s interesting is that this has practical implications for cloud computing.

Microsoft researcher Jim Gray, who spent a lot of his career looking at the economics of data, concluded that, compared to the cost of moving bytes around, everything else is effectively free! Getting information from your computer to the cloud (and vice-versa) is time-consuming and potentially expensive. The more data you’re trying to move, the bigger the problem is. Just as a large physical mass exhibits inertia (i.e., it takes a lot of energy to get it moving) and gravity (i.e., it attracts other physical objects, and the larger the mass the stronger the attraction), a large chunk of data also exhibits a kind of inertia, and tends to attract other related data and the applications required to manipulate it. Hence the concept of data “gravity.”
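A quick calculation shows why “moving bytes around” dominates. The sketch below, which assumes a few typical connection speeds purely for illustration, estimates how long it takes just to move a data set into or out of the cloud:

```python
# Why data has "gravity": time to move a data set over typical connections.
# The connection speeds are assumptions for illustration only.
data_sizes_gb = [10, 100, 1000]                    # 10 GB, 100 GB, 1 TB
links_mbps = {"25 Mbps DSL": 25, "100 Mbps fiber": 100, "1 Gbps LAN": 1000}

for size_gb in data_sizes_gb:
    bits = size_gb * 8 * 10**9                     # decimal gigabytes -> bits
    for link_name, mbps in links_mbps.items():
        hours = bits / (mbps * 10**6) / 3600
        print(f"{size_gb:>5} GB over {link_name:<15}: {hours:6.1f} hours")
    print()
```

At 25 Mbps, moving a single terabyte takes the better part of four days of continuous transfer, which is exactly why large data sets tend to pull applications toward wherever the data already lives.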

The implication for cloud computing is that, if you’re going to move your data to the cloud, then it only makes sense to have the applications that will access and manipulate that data in the same cloud, and vice versa. Otherwise, you’re going to constantly be moving data in and out of the cloud repository. And, of course, that’s what DaaS (Desktop as a Service) is all about: We move your data to the cloud, and we give you a cloud-based desktop session in which your applications run. Your data and your applications in the cloud – together, as the concept of data gravity suggests that they should be.

And, of course, you get additional benefits: the ability to work from any location where you have an Internet connection, on any device that will run the Citrix Receiver client; an instant DR/BC plan; fast and easy scalability; and immediate support for a BYOD (“Bring Your Own Device”) initiative.

Don’t fight gravity…

ManageOps Debuts “Follow-Me Data” Cloud Service

Follow-Me Data, the next big thing in cloud storage and backup.

Woodinville, WA, February 14, 2014 – Today ManageOps announces the availability of Follow-Me Data as an enhancement to its cloud services. This new service solves one of the most frequent challenges of working in cloud environments: how to get to your data when no Internet connectivity is available.

Cloud computing depends on a good Internet connection. Until now, the lack of an Internet connection meant you had to settle for impaired productivity or make do with painful workarounds like downloading files in advance, bringing them with you, and manually uploading them later. Beginning today, you are no longer stuck when connectivity is not available.

With Follow-Me Data, your important files are with you all the time. The service provides a folder on your client device (Windows, Mac, Android, tablets, Linux) that automatically synchronizes your changes the next time you connect to the Internet, creating a seamless user experience. For example, you can work with documents while on an airplane, and your files will automatically be synchronized when you arrive at your destination. Your colleagues will have access to your most recent work within seconds with no additional effort.

Another business challenge this solves is getting important company data off individual devices and into your cloud to ensure privacy, security, and recoverability. With Follow-Me Data, files are automatically secured in your company’s cloud any time Internet connectivity is available.

ManageOps specializes in complete hosted private cloud solutions that enable mobility, BYOD (bring your own device) programs, and deliver a more secure and reliable computing environment than most in-house solutions can offer. The addition of Follow-Me Data boosts our customers’ ability to work remotely, as documents will now be easily accessible from a variety of client devices whether they’re on or offline.

“One of the problems inherent in cloud computing is the synchronization of cloud data across multiple devices. Our new ‘Follow-Me Data’ feature uses industry-leading cloud technology to solve one of our customers’ biggest challenges by synchronizing your data files to all of your client devices,” says Scott Gorcester, CEO of ManageOps.

This feature can be provisioned for specific users, e.g., those who travel frequently, but can also empower users companywide. With Follow-Me Data service in your cloud, every employee can share documents and have the most recent versions at their fingertips in real time. Folders can be managed by an administrator to provide an additional layer of protection, while still granting granular access and security controls to users. For added security, there is a remote wipe option for all devices and desktops – making it possible to remotely remove confidential company data from a lost or stolen client device.

# # #

If you would like more information about this topic, please contact Laura Gorcester at 425.939.2704 or by email at Laura.Gorcester@www.manage-ops.com.

ManageOps’s CEO Moderated “Defining Software Defined Storage (SDS)”

FOR IMMEDIATE RELEASE – February 7, 2014

Bellevue, WA, January 28, 2014 – ManageOps CEO Scott Gorcester moderated a panel discussion on Software Defined Storage (SDS) at Bellevue City Hall, sponsored by ATI (Abasys Technology Inc.). Topics discussed included HIPAA compliance, hardware comparisons, and the technical flexibility of SDS solution sets. The audience asked questions like “How does SDS address problems like HIPAA compliance?” and “Are there recommended configurations for getting started with SDS?” Mr. Gorcester led a panel of local experts including Steven Umbehocker, CEO and Founder of OS Nexus, and Paul Bibaud, Solutions Architect with Pogo Linux/Pogo Storage. About the event, Mr. Gorcester said,

“This event provided a forum to share ideas and valuable information with experts in highly relevant technologies. We worked to serve not only the technicians in the audience, but to introduce the business practicality to the discussion as well.” – Scott Gorcester

A number of technology experts were in the audience as well. Irene Hutchinson of ATI attended to gain ideas about a burgeoning technology space.

“The independent nature of these events gives me expertise on specific topics that are valuable to my business and my clients. These events are moderated by experts who can clearly explain complex technologies and their business impact,” said Mrs. Hutchinson.

ManageOps, based near Seattle, WA, is a Cloud Services provider with expertise in cloud solutions based on Microsoft, Citrix and other technologies.  Sold mainly through channel partners, the ManageOps cloud platform is stable, secure, scalable, and flexible enough to meet the needs of businesses of all sizes.  ManageOps works to earn the respect and trust of partners and clients by providing creative technology solutions with friendly and accessible customer support.  Learn more at www.ManageOps.com.

Northwest Tech Alliance, based near Seattle, WA, is an independent technology association dedicated to bringing together some of the brightest minds from the technology industry. NWTA events are focused on helping attendees network with other technology industry professionals, provide education and information relative to the latest technologies in the industry and generate opportunities for personal, professional and business growth. Learn more at www.nwtechalliance.com.

####

If you would like more information about this topic, please contact Laura Gorcester at 425.939.2704 or by email at Laura.Gorcester@www.manage-ops.com.