Posted by Peter Cooke in Application Virtualization, Customers

"Losing control and loving it" - Delivering virtual apps at Queen’s University

In this presentation, given at a recent Software2 User Day, Canada's Queen's University explain how they've made the move to virtual applications, providing students with access to software on-demand.

Titled "Losing control and loving it", the team at Queen's University describe their journey from traditional software deployment methods and imaging on-site PC labs towards virtualization. Learn how the IT department are now able to deliver their software to students in a fraction of the time it took previously.

Learn how Queen's University have solved some common challenges, including:

  • Ever-increasing image sizes
  • Reimaging labs every summer
  • Software only available in set labs
  • "Disk imaging cycle of pain"
  • Delivering heavyweight applications
  • Complex virtual desktop environments



Delivering virtualized applications across campus - Video transcript

Okay. So, next up is Queen's University.

So, if Sheridan are the grand old senior folks of this adventure, we are the newbs. We've been at this since just earlier this year. And I'm going to tell you a little bit of a story about how we kind of got here. It's probably a journey that you will resonate with. I think most of you have been through this, some of this same sort of process, but it kind of sets the stage for Paul and Graydon to talk about where we are now in terms of AppsAnywhere.

So, we are a team within Queen's University. We are the faculty of engineering and applied science. We have our own IT group. That'd be us. And we work with central IT. So, Queen's is slightly decentralized from an IT point of view. Central IT handles networks, and telephone, and a bunch of shared things. And we look after things that are unique to the faculty of engineering. So, so far we're the only people on the campus using AppsAnywhere. But I suspect, when I start showing it off, there's probably going to be some interest from other places. Karen will be happy to know.

So, where did we come from? It's sort of in the last decade. Well, like most of you, nine or ten years ago we had a lot of physical computers in labs, rooms full of physical hardware. And there were a lot of issues with that. We had to keep coming in and fixing things, power supplies, dead hard drives, all that kind of stuff. Because we're teaching engineering, a lot of our software is extremely heavy, very resource heavy, very large. That meant the images kept getting bigger. Our instructors kept choosing new pieces of software. They couldn't use the same 3D CAD program that the other guy used. They had to have a different one because it drew chemical symbols better or something. So, the images kept getting larger, which meant the whole imaging thing happened.

We had to keep ripping and replacing every summer. A certain number of labs had to get redone. You couldn't use the software unless you were in the lab. And, of course, the dreaded thing we call the disk imaging cycle of pain, which you probably all went through. We tried to get the instructors to tell us what they needed back in July, but nobody was around because they were all off doing research for the summer. We'd finally find out the third week in August. We'd build a disk image. There would be all these conflicts because the software was jumping all over itself and needed different versions of Flash. We'd finally get something built. We'd test it. They'd come in. We'd find more conflicts. And then, just when we thought we had it done, one of the pieces of software would get upgraded. And we'd start this whole process again.

So, it became an incredibly time-consuming process. And we got to the point of saying to the instructors, sorry, no. You can't have this until January, which was not popular in the first week of September. But the labs had their advantages. There were a lot of resources in there that we were making use of to run all this software. And we can't tell students that they have to bring a laptop. We're largely government funded. That becomes a class fee. We can't have class fees beyond a certain point. So, we can't say to our students, you must bring a laptop. So, we provided labs for them to work in.

So, we had to figure out something a little better. So, we made the step into virtual desktops. And virtual desktops solved a lot of those problems, particularly around bringing in a new piece of software. We could deploy them. We could roll them out relatively gently without breaking other things, or at least, if we were going to break something, we would know before it got into production. Reduced conflicts, got rid of a lot of stuff.

We had one problem which was that, because we were now virtually never walking into the computer labs, we had to make a deal with the cleaning staff to let us know when someone spilled a Coke all over one of the keyboards. Because we just otherwise wouldn't find out, which was a good thing from our point of view. And, of course, it reduced a lot of lifecycle costs. And the students really liked the ability to access the virtual desktops remotely, which they started doing a lot.

Virtual desktops, right? Some of you have been through this experience. We made what I would consider two errors early on. One is, I told you earlier that central IT manages all our campus networks. And virtual desktops, as you probably know, don't require a lot of bandwidth, but they require very clean bandwidth with very low latency. And our campus has yet to implement quality of service on the campus backbone, believe it or not. We're now putting in a VoIP phone system, so that's going to be entertaining. I think they'll have quality of service by then.

But they didn't have it when we were trying to do this. And we just found performance issues across campus, especially in remote buildings. You move the mouse and the cursor would kind of maybe not quite keep up. And that was extraordinarily annoying for people trying to use them. So, the first mistake was not dealing with the networking piece early enough.

The second mistake was that we told people we were giving them a virtual desktop. And we wish now we had never done that. What we should have done is walked up and said, here's your brand new computer. Look how tiny it is. Because virtual desktops became the catchall for every problem that people had on their system. Every Windows error suddenly was a virtual desktop error. Every time something slowed down, it was because of the virtual desktop, even if it wasn't. We had one staff member a couple of months ago call us and complain that her virtual desktop was really slow and she couldn't open her Excel properly. She wasn't even on a virtual desktop. She was on a physical machine, but it was like everything was the virtual desktop's problem. Right? So, the whole perception issue became a big one.

Also, although we can't tell students to bring their laptop, they're all bringing laptops. They're engineering students. They're all coming with laptops. And they wanted to use their laptops. They didn't want to have their desktop replaced by our desktop to be able to run our software. Department heads more and more want to take up some of that space and turn it into more social space. They want to take some of those computers out and put in couches so students can sit and do group work with each other, and not necessarily in a lab situation like this room. And the other problem we're running into is that we're building these active learning classrooms, and they have wifi and they don't have computers in them at all. So, this whole model of the virtual desktop wasn't really working so well.

So, a couple of years ago I saw, actually I think, a presentation by Sheridan. And it pains me to use this word, but it was a paradigm shift. It was a whole new way of thinking about how we were going to distribute applications, to look at what Software2 was doing. And so, it occurred to me that we could suddenly use all those resources the students were walking in with, and some of them are really nice resources. I mean, Mac users could run the virtual desktops before, but we have probably about a third Mac, believe it or not, for an engineering faculty. A number of our teachers are very, very keen Mac users. So, that helped.

So, it was funny because in a way we were going backwards. We were kind of losing control. We'd gathered control with the virtual desktops. Now we were kind of giving it up again, but in a controlled way, in a way that we were sort of holding onto the pieces that we need. And, of course, the app store model, the students all know and love. So, we went live in September and they just kind of started using it. So, I think that takes it to you guys.

Hi, my name is Graydon Smith. I'm the manager of systems and development. So, my team is responsible for the backend. What we're going to do is take you through a timeline of basically every stage that we went through for our implementation. The story should be pretty familiar to all of you. It's just more of a telling of what our experience has been with AppsAnywhere.

So, we did our first implementation kickoff meeting on April 15th. And yeah, that was pretty good. We got all the information we needed. The pre-installation call went well, and then we just basically went and built our environment. So, everything we do is in VMware. So, we have a VMware vSphere environment. We use Compellent storage on the backend. Everybody that does AppsAnywhere is pretty much familiar with this. We use HAProxy. We use weighted least connections for our load balancing, as opposed to round-robin. I think Sheridan does round-robin. Any reason why you guys chose round-robin, or-

I didn't know your presentation was interactive.

Yeah. Oh, by the way, stop me at any time. Please stop me at any time if you have any questions. I'm intrigued. All right. So, we went for weighted least connections simply because we wanted to make sure that we were effectively distributing the load across the servers. It did pose some challenges, especially around the header tracking. But, for the most part, we just did the vanilla setup. We're using roughly the same amount of storage and the same amount of processing that Sheridan is using. So, it's pretty generic. Next slide, please.
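For readers who want to see the difference in behaviour, here is a toy sketch of the two balancing policies mentioned; the server names, weights, and session counts are invented for illustration and are not Queen's actual configuration (in HAProxy itself this corresponds to `balance leastconn` with per-server `weight`s).

```python
# Toy contrast of the two balancing policies discussed above; the
# server names and weights are made up, not Queen's actual config.
from itertools import cycle

servers = {"web1": 2, "web2": 1}        # name -> relative weight
active = {name: 0 for name in servers}  # live session counts

def weighted_least_conn() -> str:
    """Pick the server with the fewest active sessions per unit weight."""
    return min(servers, key=lambda s: active[s] / servers[s])

rr = cycle(servers)

def round_robin() -> str:
    """Pick servers in a fixed rotation, ignoring current load entirely."""
    return next(rr)

# Six long-lived sessions arrive and none finish in between, which is
# exactly when least-connections beats round-robin: it keeps routing
# around the busier box instead of blindly alternating.
for _ in range(6):
    active[weighted_least_conn()] += 1

print(active)  # the double-weight server ends up with double the sessions
```

The header-tracking challenge mentioned above is real with this policy: because consecutive requests from one user may land on different servers, some form of session stickiness has to be layered on top.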

So, we did that and we did our installation onsite training. And, at the end of it, we had five apps. Great.

I did one.

Yep. We were pretty happy. So, yay. We got five apps all done. Yeah, we had to undo everything he did. But anyway, we'll leave that alone. So, yeah. So, we're doing this. We've got the basic install. Next slide. One of the things that was really important for us though was, as Stephen mentioned with the Macs, we really needed that support for the non-Windows computers. So, we started digging in. We thought, well, we're not going to buy the Parallels piece. We're going to try some things. So, I went right into connecting the Horizon View VDI component. Configuring it wasn't exactly what I expected out of the gate. It wasn't overly straightforward. And we have a licensing issue around VDA licensing. It doesn't really solve the problem, because technically you need a VDA license for any session that you spin up on Horizon View. We're trying to move away from that license model because it is quite expensive with our campus agreement.

So, then I thought we'd go to native RDS, the native Windows Remote Desktop Services, because we are licensed for that on campus through our campus agreement. So, it works. It's not overly flexible, in that you only get one farm that you can work with. Authentication was a bit of a pain for us because you had to also authenticate into the RDS as well, or at least that was my experience. And so, in talking with Phil and with Software2, this is where I spent most of my time over the summer. And again, we still had some licensing questions around that, but we weren't making a lot of progress over the summer.

But the creation of apps just kept going during that time. And then, we went to... what was it we had? Oh, yes. So, we then... Paul will talk about this.

So, we decided early on that we wanted to create a formal process for people to request new apps in the app store. And... go ahead. So, this is the workflow that we came up with. And we have a service request on our service desk that matches this. And we wanted to make sure that we had some separation of duties. So, my team, which is the support team, once an app has been approved, it's my team that does the packaging. So, they do the initial build of a package. They then do a local test. So, not in production or on the infrastructure, just using the player. And, if that passes, it gets handed off to Graydon's team. And Graydon's team puts it into the environment and does a run test there as well.

We have two provisions. So, instead of using two separate test environments, we actually did it as like IM provisions. So, we do it as a separate provision. It's still the same URL, but that way we can keep them separate.

And, if the run test passes, we then notify the requesting user. And, for all of the ones we did this summer, that was me. And the end user who made the request does their own test. And the reason we did that was a lot of the software is stuff that we do not have domain knowledge on. So, we can run it, but we don't know that it's actually operating the way it's necessarily supposed to. So, typically the requester will do a test. And, if that's approved, then it goes live in production.
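The request-to-production pipeline described above can be sketched as a simple state machine. The stage names below paraphrase the talk and are purely illustrative; this is not a real AppsAnywhere API.

```python
# Hypothetical sketch of the app-request workflow described above.
# Stage names paraphrase the talk; none of this is an AppsAnywhere API.

STAGES = [
    "requested",   # user files a service-desk request
    "approved",    # licensing / fit review by the managers
    "packaged",    # support team builds the package
    "local_test",  # support team tests locally with just the player
    "env_test",    # systems team run-tests it in the test provision
    "user_test",   # requester verifies the domain-specific behaviour
    "production",  # promoted to the live provision
]

TEST_STAGES = {"local_test", "env_test", "user_test"}

def advance(current: str, passed: bool = True) -> str:
    """Move a request one stage forward; a failed test goes back to
    the packagers, mirroring the separation of duties in the talk."""
    if current in TEST_STAGES and not passed:
        return "packaged"
    i = STAGES.index(current)
    return STAGES[min(i + 1, len(STAGES) - 1)]

# A request that passes every gate walks the whole pipeline.
stage = "requested"
while stage != "production":
    stage = advance(stage)
print(stage)  # -> production
```

The separation of duties falls out naturally here: the support team owns `packaged` and `local_test`, the systems team owns `env_test`, and a failure at any test stage routes back to packaging rather than skipping ahead.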

So, yeah. So, after we did the official app request process, then Phil came on site, met with us, and talked about the best practices we were supposed to follow. We were already pretty much ahead of the game. A lot of the stuff he talked about, like having that process down, we were there. We started doing app packaging. We're building a few apps. So, we're at 18. And then, I'd given up on the RDS and on the Horizon View and said, that's it. Let's put in a trial of Parallels.

So, we put in a trial of Parallels. Compared to all of the problems we had with the native RDS and stuff, the trial of Parallels was a couple-hour job. Like, I was absolutely astonished at how easy it was to put in, even from a trial perspective. Go, next. So, basically for our trial, we just introduced a gateway and one RDS server off to the side. That's all we had to do. We didn't load balance it at this point. We were just trialing it. We were playing with it. And the Mac users, all of a sudden, were working. We found a few little problems, like when you build the package for Cloudpaging, you actually have to specify a server. But it's standard stuff. But, yeah. It was going quite well.

And so, at the end of that, we're still packaging apps in the background. We're doing things. We're at about 37 apps right now. We're feeling good. Phil, when he came by, really spurred us on. Made us go, okay, let's get moving on this. But we're at what, August 8th at this point?

August 8th, yeah.

Yeah. So, term is starting to loom and we've got 37 apps.

So, then we had App-a-palooza. So, from August 8th to September 5th, which was the first day of class, it was all hands on deck. So, my team and Graydon's team were both doing packaging. The top priority was basically just pushing and getting apps put in. And so, at the start of term, we had 97 apps ready in the store.

Once we got the process down though, it really moved quite quickly.

Yes.

It was really quite impressive. So, at that point, then it comes to game day, which was September 5th, the start of classes. I'm going to talk about a class right now, APSC 143. This is our engineering introduction to programming. Okay? And we use a piece of software called CodeLite. The way that this used to be done is that the students would show up into one of these wireless, active learning classes that Stephen talks about. So, there's no wired connections. Every student is wireless. And the instructor would have them all download CodeLite, right then, right there, do the install, and configure. So, you've got over 200 students in a room accessing the wireless, all trying to download this 500 megabyte... well, it's about 150 megabyte piece of software, and then it packages out to 500 megs, all at once.

And so, you can imagine it just doesn't work. Every single lab failed. So, the instructor had come to us a few times saying, how do I do this? And we said, well, let's use this. So, we packaged it up for him. And, on game day, the very first day, September 5th, we went in. There were 250 students all on the wireless. It's a wireless-only classroom. I'm still only running a Parallels trial at this time. So, I've got a third of all of the students in that room using Macs. And they all start launching it. Not a problem. We have over 200 students all connecting, pulling down CodeLite. They're getting up and running. He doesn't have to configure the environment for them anymore. We've already configured that for them. He just gets them going. They're right in there. Hello, world, right out of the gate. The performance, we didn't see any issues.
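Some back-of-the-envelope arithmetic makes the contrast concrete. Only the 200-plus students and roughly 150 MB installer come from the talk; the shared access-point throughput and the prefetch fraction are assumptions for illustration.

```python
# Back-of-the-envelope numbers for the classroom scenario above.
# 200 students and a ~150 MB installer come from the talk; the shared
# access-point throughput and prefetch fraction are assumptions.
students = 200
installer_mb = 150              # per-student download size
ap_throughput_mbps = 300        # assumed usable shared wireless capacity

# Everyone downloading the full installer at once:
total_megabits = students * installer_mb * 8
install_minutes = total_megabits / ap_throughput_mbps / 60
print(f"{install_minutes:.0f} minutes just to pull the installers")

# A streamed first launch pulls only the blocks needed to start
# (assumed here to be ~10% of the package):
first_launch_mb = 15
stream_minutes = students * first_launch_mb * 8 / ap_throughput_mbps / 60
print(f"{stream_minutes:.1f} minutes for the first-launch prefetch")
```

And the idealized estimate is generous: real wireless contention with 200 simultaneous full downloads degrades far beyond the serial figure, which is consistent with every single lab failing under the old approach.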

The only issues that we did discover were that we had some students showing up with 32-bit machines that we hadn't considered. Okay. Well, that was a bit of an issue. So, we had to do some Parallels tweaking. And the one thing that we did find was that the trial of Parallels we were still running didn't scale very well. So, we suddenly had to throw a whole bunch of resources at that and realize, okay, we have to go production on Parallels now.

So, as of October 8th, we got our real Parallels in. We'd been limping along. I'd been adding more RDS servers to the Parallels, but it just wasn't set up for load balancing or anything like that. So, just this past October 8th we basically changed our architecture. So, now we run our Parallels directly through the HAProxy, but the RDS farm is exposed out on our public network for the campus. And it's running really quite well. We still have a few issues around session management and stuff, but for the most part, when it comes to managing the non-Windows clients, Parallels, if you haven't done it, I would highly recommend trialing it. Yeah.

And so, as of today, we now have 114 apps in the store. And we're continuing to work on the last few that are still in the queue.

And we have how many for the next semester? About 20?

20, yeah.

20 more to go. And then, next steps.

I have a question. Why didn't you use Software2's packaging service? Oh, well, as a matter of fact, Stephen, I'll let you know why that was. We didn't use the packaging service initially because... well, I think part of the reason was we just kind of got on a roll. Right? And there's a certain advantage to getting your team to do the packaging, at least initially. It's like pilots: yeah, they can fly on autopilot, but I need them to be able to take control of the airplane if necessary. So, I think getting the team... and also, and correct me if I'm wrong, but I think there's a real sense of accomplishment and of ownership of the system. And so now... at the beginning of August I said to Paul, okay, you have to pick a point this summer and decide, okay, we're done packaging. The image is going to have this on it and we'll use AppsAnywhere for the rest. And Paul said, no. We're going to do them all. And I said, you don't have time to do them. And he said, no. We're going to do them all.

And you know what? They got them done. I mean, they got them all done. And I think that was kind of rallying the troops and getting them going on this and just blowing through. So, I mean, kudos to both Paul and Graydon and their teams who just got this rolled out. And I think that was a big help. That's not to say that I don't think... we were just talking about the packaging service earlier. I think there may be some benefits to us now to sign on and make some use of it, and particularly for the updates going forward. But I think it was a great experience for everybody to just kind of get this done for us. And it worked beautifully, so.

Yeah. There were some large packages where we probably would have been better off using the packaging service, because we got hit with them. Like... good examples?

SolidWorks was one.

SolidWorks, yeah.

Some of the idiosyncrasies around Eclipse and dividing that out were a little bit of a pain.

Anyway, any other questions? Anybody?

How big is your packaging team?

My team? I've got two people. But we're not just packaging. My systems team does all the research support and infrastructure services for the faculty of engineering. So, packaging is predominantly not what we do.

So, my team, I have three techs plus a dispatcher. So, it was just the three techs that were doing packaging, as well as myself, but then dealing with tickets on top of all of that. So, we don't have anyone dedicated to only packaging. We're generalists.

Yeah. Our model is split up in the way that we have a first-level support team and a systems team. But we also provide a higher level of support where the first-level team has difficulty. And the area where the packaging service probably would benefit us is when the first-level packaging team comes across some really obscure things about a piece of software, which would require my team to go in and find the registry settings, or all of the little tweaks: what needs to be in layer four versus what needs to be in layer three, what needs to actually be down lower.

Well, it was great for us to learn that stuff. But, in the future, we probably would be better off just saying, just hand it off. Right? So...

So, in your request funnel, there is a check there for approval from the IT...

Yeah. So, that step comes to the three of us for us to review and approve whether the application should even be done. So, that's if, say, someone requests a piece of licensed software, but we don't own any licenses for it.

Yeah.

All right?

Or if we actually have a piece of software that is already licensed, it allows us to look and see is there a way to consolidate licenses here? Right? Because our faculty members don't always communicate with each other on what pieces of software it is they want. Right?

No. Yeah. So, it's more just a check to make sure that we're able to provide that piece of software and be in compliance with licensing. And that it's something that meets what we're really trying to accomplish with the school. So, we're not going to package Doom or something like that. Right?

Yeah. And there's actually another reason why we do this. And it's, what's the best way to package this? Right. So, either Paul and I will talk about it and say, is this... there was the talk about adding plugins to packages. Do we actually want to do it that way? Or do we just want to build it as a totally separate package? That's the point where we make that decision, where we actually decide what is the best approach so that Paul can direct his team on what they should be doing.

Yeah.

So, it's a catchall, right?

What's your process in terms of managing sessions? I understand with the RAS, we use profiles in the systems, right? So, do you go out there? Do you add the profile? Or do you [inaudible 00:24:37].

Do you add what? Sorry.

The user profile, when they are... profile persistence.

Okay. So, the question is regarding profiles. So, user profile management on the RAS RDS. From our VMware View side, we were using UEM to do our profile management across the disks... or sorry, across systems that are... when we do the... pardon me?

Non-persistent.

Non-persistent disks. Thanks. That's the phrase I was looking for. And we still utilize UEM for that. So, they basically get a profile that just gets removed periodically. But we use UEM for persistence across the machines. And then, we also use a product called FileCloud for our network file storage. So, they get a connection there.

Most of the people that use the Parallels RAS, though, like to use their local storage. And so, they're actually able to connect, with their Macs or whatever their device is, or their 32-bit Windows machines, to the local storage on the machine. And that way they at least get persistence. But, when it comes to user settings, we use UEM for that. As for whether we continue with UEM, we have a project to evaluate a UEM replacement, whether we're going to stick with that or move to something else. So, great question though. Anything else?

Thank you, guys.