Applications Anywhere at University of Wolverhampton: What, Why, How?
Nici Cooper, Assistant IT Director at Wolverhampton University, shares how they are delivering and deploying software applications to students anywhere, anytime, on demand.
Applications Anywhere at Wolverhampton University: What, Why, How? - Video Transcription
Good afternoon, everybody. I’m Nick Johnson, one of the founders of Software2, and I’m at our head office in Leeds in the UK today. Sat beside me is my colleague Leon who looks after our European customer base—I know some of you have met Leon in the past. Thank you all very much for your time also, we appreciate it. And thank you to EUNIS for the opportunity to present to you today. I just wanted to give a very quick introduction on us and what we do before I hand the floor over to our guest speaker, Nici from Wolverhampton.
We now work with about 100 universities, across 12 countries, and something like 90% of our customer base is today in Europe, but we did also open an office in the US last year, where I now spend some of my time. We work exclusively in education and as a team, we’ve probably met 600 to 700 colleges and universities now. What’s interesting to me is the similarities. People talk about exactly the same problems to us:
My desktop image is too big, my logon screens are too slow, how do I deliver SPSS, MATLAB, SolidWorks?
Some sales guru would probably tell me off for not changing my presentation, but we do see everyone talking about the same challenges. That's why it's so great to see organizations like EUNIS, where we can share some of these experiences and help each other out.
We see the application deployment challenge in colleges and universities as really unique, like no other sector. A large university like Wolverhampton could have thousands of open access machines. You then have hundreds of pieces of software, all needing to be accessible by potentially tens of thousands of students. And this challenge is unique. In a corporate environment, we probably use the same two or three apps on the same machine all the time. That's completely different to an open access machine, and that is what is unique. In other sectors, if you don't rebuild your machines every summer, it's not the end of the world. In a university you obviously can't simply skip it, because the machines would grind to a halt. It gives me great pleasure to introduce Nici Cooper from the University of Wolverhampton. We've been working with them for 2 or 3 years now, but I guess it's only in the last 6 or so months where they've gone big bang and now offer this wonderful service to their students. So I hand over now to Nici, to talk about their Applications Anywhere project.
Nici Cooper - Wolverhampton University
Thank you, Nick. Hi, everyone. It's really great to be participating in this webinar with colleagues from Software2 and the EUNIS community, and to have the opportunity to share our Application Jukebox [AppsAnywhere] project with you. We named our project Applications Anywhere, because ultimately that is what we're hoping to achieve. I'm intending today to share with you what that project is, why we took it on, and how we're going about delivering it.
I wanted to start with a little bit of context. This project is part of a wider Digital Campus Transformation Programme. It's a 5-year university-funded programme of activities, and it's basically seeking to deliver an online environment open to all members of the university community, a digital space which supports lifelong learning and personal development, and a community and an ecosystem that transcends traditional university boundaries. The aim of the programme is to ensure that the digital campus is as important as the physical campus. At our university, we focused a lot on buildings over the last few years, and I think it's fair to say we probably neglected the digital side. It's one of 5 high-profile projects, and it's a result of direct feedback from the students regarding their access to specialist software to support their teaching and learning. The first phase is to deliver Windows software to university-owned computers, but ultimately we're aiming to deliver university software to any Windows device, on or off campus, and to implement a similar solution for our Apple users. All of the projects in this foundation phase have high-level sponsorship. Our sponsor is the Dean of Students. It's a little bit of a double-edged sword, because it does make it easier to argue for resources and to push others to cooperate with us, but it also means that any failure is highly visible and extremely undesirable, and I really don't want to go there. And our project is the first one to go live.
A little bit of history might help too. We have a very large software portfolio, somewhere around 420+ Windows applications, and that excludes Linux, Mac, and other specialized areas like games labs and network labs. That portfolio seems to increase in size each year, and the images are becoming almost unmanageable, very unwieldy. The requirements gathering usually starts in February, with the aim of being complete by May, to enable specialist images, per faculty and school, as Nick talked about a few minutes ago, to be built, tested, and deployed over the summer period. In reality, that's rarely achieved. The requirements from academics keep drip-feeding in all over the summer build period, which means the builds are ever later getting finished, and the deployment in the end is a mad panic because there is this immovable deadline for the start of teaching. The students have told us very clearly that they were dissatisfied with access to specialist software. They couldn't make use of the areas with the longest opening hours, for example the learning centres that are now open 24 hours a day, or get access to software at weekends. The current process is inflexible, and it means that the faculties only have one opportunity per year to request software for teaching. The window we have in the summer for doing the builds and deploying the software is reducing year on year. And that won't just be us, I'm sure; it's a picture you're all familiar with. There is no link to other university processes, so for example the academic development process, where new courses are designed and resources are identified. People think about the books they need and the other online resources they need to support the modules, but they never think about the software, and that really needs to change. Every year, we build the specialist images from scratch, despite the fact that 70% of that portfolio remains unchanged year on year.
Applications are rarely, if ever, retired, and there can be a number of applications that do basically the same thing, because the portfolios are based on individual academic preference. Nothing is ever challenged, because we don't have any usage data, so by the time we hit the academic year, generally the result for our staff is this: STRESS.
They work stupid hours, including weekends, they are exhausted and sometimes bad-tempered, and they have no break before they're back into the day-to-day business as usual. So what are we trying to do by setting up this project? Our objective is really to make the Windows applications required for teaching available to students and staff, wherever and whenever they need them, on a Windows device of their choice. We also want to develop and deliver a solution that will enable our Apple users, who are a growing community, to download and install software applications on both university-owned and personal Apple devices. We want to undertake an audit and assessment of existing and new software applications for their suitability for being made available as virtualized applications. We want to identify and understand the issues around software licensing associated with delivering virtually. We want to define and develop appropriate monitoring and reporting of our software licenses, because we don't do that particularly well at the moment. And also to look at defining and developing processes that would enable us to take a whole-lifecycle software management approach, from that initial request, through to deployment, and ultimately through to retirement. We want to undertake a review of the immediate requirements and the future structure of the people needed to deliver the service, making sure they're appropriately skilled and trained. We haven't been great at that in the past. When projects go from project to business as usual, they tend to get lobbed over the fence and we just hope for the best. And then we also wanted to make sure we were having effective and timely communication with all of our stakeholder groups. So that is what we actually set out to do.
But it's not just about the technology, as I'm sure you will have picked up from the previous slide. That said, to date, the lion's share of the work since we kicked off has mostly been about the technology, the applications, and the outward communications. Broadly, we have 6 work streams. They are the technology, which is about the identification, the procurement, and the installation of a solution for our current computing environment. I wish I could take the credit for the solution that we chose, Application Jukebox from Software2, but I can't, because it had already been identified as the best solution to meet our needs before I took over the project. I never doubted it, but I still had to convince procurement that this was the way to go. The applications work stream is about ensuring all of the applications are suitable and ready for delivery via our project. And I've taken those words straight from the project initiation document, because the audience for that document would not have understood the terms that we use in our department around verification and packaging. The process work stream is, at the moment, focused pretty much on our internal processes to ensure the safe delivery of this project. But next will come work to define the processes to support that transition to business as usual, and the development of the whole-lifecycle software management approach, from request, through deployment, out to retirement. For software and licensing, we've undertaken an audit of our centrally managed software licenses, and we did that as a pre-project activity. We made a case to a fairly high-level committee to shift the funding for software from faculties to a central pot we could manage. This is an agreement in principle with the deans, pending the review of a wider process. Future activity in this area will be around monitoring and reporting on usage.
That will help the faculties to refine their portfolios and hopefully identify software that is aged or no longer valid and that we can get rid of. For the people elements, we are looking to identify and implement a fit-for-purpose structure and the ongoing development of appropriately skilled people who can support this when it goes into the live environment. In terms of communications, as you'd expect, leading up to go-live there has been a lot of activity in this area. We presented an overview of the project to our colleagues across our department to make sure they had an awareness and could communicate the basic concepts. We briefed colleagues with an academic liaison role, we had a desktop icon designed, we launched a set of webpages, we ran a massive poster and digital-screen marketing campaign, and we also have a pop-up screen on all student computers with a direct link to the hub and a link to the webpages. And still, we're being told that's not enough.
I want to change the focus now and tell you how we're going to make it happen. The 6 things on this slide are the basic ingredients of the recipe for delivering a successful project for us. Very early in the life of this project, we made quite a bold decision that there would be no plan B. We decided that we weren't going to make any specialist builds. We deemed 2 of our 3 faculties low risk, and one, Science and Engineering, medium to high risk, as this was by far the biggest portfolio. I think I probably should make it clear that we weren't just ignoring the risk, crossing our fingers, and hoping that it didn't happen. We acknowledged it, understood it, and felt we could manage it. Our approach, then, was that everything targeted at a managed Windows build would be attempted first through Application Jukebox, unless the validation stage had deemed this not to be appropriate, for example if there were license, hardware, or environmental restrictions. Only applications with these restrictions, or which otherwise proved unsuitable for virtualization through development or testing, were deployed through other methods, things such as scripting through group policy, sending live through Config Manager, or, as an absolute last resort, a manual install.
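As a rough illustration of this "plan A first" triage, the routing could be sketched as below. The category names and restriction flags here are my own, hypothetical simplifications, not Wolverhampton's actual data model.

```python
# Illustrative sketch of the "plan A first" routing described above.
# The field names ("restricted", "scriptable", etc.) are assumptions
# for illustration only.

def choose_route(app):
    """Pick a deployment route for an app targeted at the managed
    Windows build, attempting Application Jukebox first."""
    # Validation stage: license, hardware, or environmental
    # restrictions rule out virtualization up front.
    if app.get("restricted"):
        return "alternative"
    # Plan A: deliver through Application Jukebox.
    if app.get("virtualizes_ok", True):
        return "application_jukebox"
    # Fallbacks for apps that fail development or testing as
    # virtual packages, in order of preference.
    if app.get("scriptable"):
        return "group_policy_script"
    if app.get("config_manager_ok"):
        return "config_manager"
    return "manual_install"  # absolute last resort
```

The key design point, as Nici describes it, is that the virtualized route is the default and every other route has to be justified by a specific restriction or test failure.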
So what did we do to make sure we could deliver on plan A? We had some ground rules. Only three, simple, and set out by me in fairly strong terms. We knew we had a mountain to climb, so hard work was inevitable. Everyone understood the role they had and the part they would play in a successful project. Having fun was not going to be optional. Yes, there were difficult times, but generally our time together, including meetings, always had some light-hearted moments, and it really helped my project team to bond. The third and most important was being able to speak freely. This was really about trust. The team needed to feel that issues could be raised, challenges to process could be made, conversations could be open and honest, and people would not be blamed. This was a team effort and we were all in it together. We would fail or succeed as a team. The feedback from the team has been largely positive. They worked incredibly hard, they've shown great belief in the project and in themselves, and they've remained positive, even when I've had little wobbles about the enormity of what we were trying to achieve.
We also had a sound set of processes that had been developed over time from previous builds, right through from that initial requirements gathering and development to image deployment. So we weren't really starting from scratch. We tested these by mapping out the steps that each piece of software would have to follow from when it was requested right through to when it went live. The steps we identified were as follows. We had that initial validation step, looking at the license and technical issues: have we got any restrictions? Can we manage them? Is it technically feasible to do it this way? Then applications were assigned to the relevant work stream; the majority were on the managed Windows build and went through to the AppJ work stream. Then we did some initial testing and development with the packagers. They created the recipe, they built the package, and they did some basic checking to see whether it would open, close, and save, things like that. Assuming it passed that stage, it then went over to quality assurance, and our desktop engineers handled that area. They did some more rigorous testing. They wanted to make sure there weren't any issues with our existing environment, that it wasn't breaking roaming profiles, things like that. If there were issues, it went back to the packaging team and they started again. If there weren't any issues, it went forward to formal academic testing, and our idea there was to get the academics to focus on the functional operation of the app and how they would be using it in the classroom. Finally, it went through deployment, which is when we were sending stuff live, making sure we were sending it to the right groups of people, depending on what our license restrictions were. Then there were the buckets. We used the bucket term quite a lot. We had an operational lead who ensured those activities moved through the buckets and that the buckets were neither too empty nor too full.
He'll be forever known as the bucket manager, whether he likes it or not. That role also ensured the effective transition of applications between different work streams as necessary. In the main, those steps worked across all the different deployment work streams. We wanted something that would give us a simple view of where we were at any given point in those processes. It would have to be simple, because otherwise people would have worked around it and we would have lost our definitive view of progress in the application lifecycle and where anything was at any given time. We wanted something that worked around the idea of these virtual buckets, so we sat down and scribbled down some requirements in not very much detail. We handed it over to one of our internal developers and said, "Can you build this?" About a day later, our AppDB tool was born, and that's what you can see a screenshot of at the moment. It might not look very pretty, but by using a combination of text boxes, we were able to get a view of where applications were in terms of development, testing, or deployment, and we could drill down into each of the applications to see additional information about any issues we'd encountered, whether a testing session had been booked, and what the outcome was. In effect, this became our progress bible. If it wasn't logged in there, it hadn't happened. And everybody used it really well.
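The bucket workflow Nici describes could be modelled very simply. The sketch below is a hypothetical reconstruction, not the actual AppDB tool; the bucket names follow the steps in the talk, and the QA-failure path sends an application back to packaging as described.

```python
# Minimal sketch of a bucket-style progress tracker like the AppDB
# tool described above. Purely illustrative; the real tool is a
# web application built by a Wolverhampton developer.

BUCKETS = [
    "validation",         # license and technical checks
    "packaging",          # build the recipe and the package
    "quality_assurance",  # desktop engineers' rigorous testing
    "academic_testing",   # functional testing by academics
    "deployment",         # sent live to the right user groups
]

class AppTracker:
    def __init__(self):
        self.apps = {}  # application name -> index into BUCKETS

    def add(self, name):
        self.apps[name] = 0  # every app starts in validation

    def advance(self, name):
        """Move an app to the next bucket (capped at deployment)."""
        self.apps[name] = min(self.apps[name] + 1, len(BUCKETS) - 1)

    def fail_qa(self, name):
        """A QA failure sends the app back to the packaging team."""
        self.apps[name] = BUCKETS.index("packaging")

    def bucket_counts(self):
        """The 'definitive view of progress': how full each bucket is,
        which is what the bucket manager watched."""
        counts = {b: 0 for b in BUCKETS}
        for idx in self.apps.values():
            counts[BUCKETS[idx]] += 1
        return counts
```

The point of keeping it this simple, as Nici notes, is that anything more complicated would have been worked around, and the team would have lost the single definitive view of progress.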
For a technology project, our use of whiteboards might seem a bit archaic but our experience is they are really invaluable. They allowed us to capture things in the moment, they helped us to visualize complex issues and to break them down step by step, and we also had a big one in our work area that helped us to manage and prioritize our issues, and to see who was dealing with what. If we were doing this over again, they would still play a really significant part.
For a number of years, the summer build has claimed squatter's rights on a space that usually operates, in term time, as a Mac lab in the Faculty of Science and Engineering, where all the Mac computers are. We do this for a number of reasons. We normally work in an open-plan environment, and it simply isn't suitable, because the team would be constantly interrupted and there just isn't enough space. So we've used this space for a number of years, and it has proved to be an ideal environment in terms of its size and its location, and we can secure it, so we can leave things there without worrying about them going missing. The only thing that's not ideal is that the temperature in the summer can get a little bit warm in there, but that's fixable. We were able to have defined areas for the different buckets, so the packagers had an area, the QA testing had an area, and we also had enough space to set up a really comprehensive representation of our varied, and in some cases quite vintage, PC estate. Fundamentally, it meant we had the right people in the right place to enable effective communications and collaborative working, and to have room for our multiple whiteboards. It was also home to an extensive supply of sweets, chocolates, and cakes, which were also a key ingredient contributing to the output of the team. I did try and introduce some fruit once, but only the once.
In terms of teamwork, we were a relatively small but perfectly formed project team, given the size of the task at hand, which also included the complementary work required to deliver this year's teaching portfolio in time for the start of teaching, so we weren't just doing one set of work. Our project team, at its biggest, consisted of one inexperienced project manager (that's me; I've never project managed anything before, but I had been asked to take it on. It was a little bit scary, but I have enjoyed it). We had one relatively experienced operational lead. You'll recall from my earlier slide our virtual buckets and alternative deployment routes; this role managed those and really did bring everything together. We had a tech lead who had been involved in a small-scale pilot the preceding academic year with a small number of academic staff, and at the project's inception he was one of the few people with any experience of the packaging process. We had four people on our packaging team: two internal staff that we seconded from one of the support teams, who were joined by two external packagers provided by Software2. In reality, we only ever had 3 people packaging full-time, as the fourth post was really used to oversee the process and to deal with any of the resulting issues, so the packagers could be left to deal with the packages. Our two externals, Matt and Andy, had a difficult job because they were coming into a team where everyone else knew each other pretty well, and Andy had the additional challenge of working off site most of the time. They were brilliant, and they were both really valued members of our team. We had three desktop engineers, each with knowledge of one of the three main portfolios, and they undertook the quality assurance testing, they assisted with issue resolution, and one has become our infrastructure and provisioning specialist. They also undertook license server work and led on the other deployment options.
We had a SYS Admin role, and that person did data verification, managed the external testing process, did all the liaison with the academic staff, and oversaw the effective running of the operational and project meetings. And I would say don't underestimate those activities, because they are crucial and take a lot of time, so having a person dedicated to them was an enormous help. We had an asset manager who was responsible for software costing, procurement, and licensing, and for making sure that the packagers actually had access to the media when they needed it. At the start, in the early days of the project, we also had an academic account manager, who would have been primarily responsible for faculty liaison and the requirements gathering. But she wasn't with us for very long, because her little girl decided to put in a very early appearance, so she left on maternity leave. In addition to that core group, we had cross-department contributions from a number of individuals and teams, including our service desk. That has been really helpful in getting them to have an understanding of the setup, and it stood them in really good stead for doing the support during the early-life-support days we're in now. We didn't always agree, but we did have open and frank discussions about the most appropriate way forward, without people thinking they had lost out in the compromises needed.
It was essential that we had effective and appropriate project team communications, because Applications Anywhere was just one part of the work needed to deliver a whole lot of software for teaching. The other elements being carried out at the same time were a build for the Faculty of Science and Engineering, which was a light build that contained some virtual Linux machines. We did a Linux physical build, we did a build for our networking lab, we did a build for the games lab, and we also did a refresh build for the Mac lab, as well as developing the other deployment methods for when we couldn't put something through Application Jukebox. We had weekly operational meetings that were chaired by the operational lead, and anybody involved in any aspect of the summer build attended those meetings. They were a great opportunity to discuss and resolve any outstanding issues. The scope was set to 7 days either side of the meeting, and the meetings had a very operational focus. Each meeting afforded an opportunity for participants to provide updates and highlight any upcoming absences. We also had weekly project meetings attended by me, the operational lead, the packaging lead, the asset manager, and the SYS Admin role. Those meetings focused on the activity we needed to do to keep the project moving forward, or, where necessary, on a single issue that really needed thrashing out in detail. In addition to those, we had daily 15-minute catch-up meetings, very informal, sat around the whiteboard to see where we were at, and Andy, the external packager, always joined those meetings by Skype. We also had a number of meetings throughout the first phase that were initiated by Software2, and we used those as general catch-up and progress update meetings, making sure we could address any issues. They also reassured us that we had really good support from them, which I think has helped us strengthen and develop the partnership between us.
We've had a couple of heart-stopping moments, but that's probably to be expected with a project of this size and complexity. We did cause a priority 1 service desk call, because we weren't aware of a limitation with Active Directory Federation Services in terms of the number of groups people could belong to. But it was identified and resolved pretty quickly, within a couple of hours, and there was no long-standing disruption to service, so we got away with that one. Upgrading the Hub two weeks into go-live, starting at 2pm on a Friday afternoon, is not ideal in anyone's book, but we had a reasonably high degree of confidence that we could do the upgrade without breaking service, and we had really strong assurances from Software2 that we wouldn't make things any worse. We didn't actually fix the issue, but the upgrade did address some early functional shortcomings identified by our users, so we were able to pick up the improved search facility, which was really welcome, and some general cosmetic improvements. It also gives us the ability to categorize our applications rather than just having a long list. On another morning, it took us all of 60 minutes to break another 360 applications when we were trying to stop some information from displaying in the hub. However, people do say that if you're going to fail, fail fast, so we could put a positive spin even on that one.
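The group-membership limitation Nici mentions is worth flagging: Active Directory authentication tokens are commonly cited as running into trouble once a user belongs to roughly a thousand groups, so app-per-group licensing models can hit it. As a minimal sketch, assuming group memberships have already been exported from the directory (the threshold and data shape here are assumptions, not anything from the talk), a pre-flight check might look like:

```python
# Hypothetical pre-flight check for the group-membership limit
# mentioned above. MAX_GROUPS is the commonly cited AD ceiling for
# groups in an access token; in practice the membership data would
# come from a directory query, not a hand-built dict.

MAX_GROUPS = 1015  # assumed ceiling; verify against your environment

def users_near_limit(memberships, headroom=50):
    """Return users whose distinct group count is within `headroom`
    of the assumed ceiling, so they can be reviewed before logons
    start failing.

    memberships: dict mapping username -> list of group names.
    """
    danger = MAX_GROUPS - headroom
    return sorted(
        user for user, groups in memberships.items()
        if len(set(groups)) >= danger
    )
```

A check like this, run as software entitlements are granted, would turn the priority 1 incident into a warning before go-live.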
In terms of challenges and lessons, there are four. However long you think it will take you to navigate the rat's nest that is educational procurement, I suggest you double it and then add some, because our procurement process held us up a lot. Even Tony, who is usually a cool customer, was getting twitchy that we wouldn't be able to deliver if we didn't get the contract sorted out, but we did it. You don't know what you're going to need. We weren't able to do as much academic testing as we would have liked, because the availability of academics over the summer period is not as great as it could be. It's a perennial problem, but it was especially tricky this year, given how we're deploying the vast majority of the software. I think for us this should get better as we move forward, getting away from this once-per-year activity peak to a more phased and evenly spread approach. Early on in the project, we identified a major risk in the form of our aging PC estate, which has not kept pace with increasing software requirements. We also had a significant number of thin clients, which we identified through testing as being unsuitable for a significant part of our portfolio. Some mitigation was achieved by securing funding, which was used to replace a number of the thin clients with full-fat PCs. That enabled us to throw some more resources at the remaining thin client estate, so that was a double win. It also enabled a conversation with the director of finance about the early release of funds to replace the oldest parts of the PC estate, so there's good stuff coming out of this. We had some compatibility issues that didn't show up during testing. One was with an application called Endloader, which wouldn't work properly with a core element of the Faculty of Science and Engineering core build called NetSupport School. We had no idea this was going to be a problem.
It took us quite a while to actually find out what the issue was, and it turned out that we could solve it by swapping USB keyboards for PS/2 keyboards, and it all works beautifully now. Another one: we've also taken the decision to remove Microsoft Project and Visio from Application Jukebox, due to some Microsoft updates that were resulting in some occurrences of broken Office. Again, it was fairly random, there doesn't seem to be a pattern, and it only affects some users, so it was something we just didn't manage to surface in testing.
Has it been successful? Well, we've been working just shy of 4 months and we've deployed over 350 applications. That's about 82% of our Windows applications that we're now delivering via Application Jukebox. We had a small project team with a short timescale over the summer holiday period. For the first time in, I can't remember how many, years, we had no one working overtime. I think we've come through this initial phase without breaking anyone. Yes, I think that probably constitutes success. What's next? Well, we're now planning the next phases, which include identifying a solution to deliver a similar, equitable service for our whole community. An extra piece of work that's come at us is the deployment of approximately 50 enabling technologies. That was outside the original scope of the project, but it's necessary due to the changes to the Disabled Students' Allowance that the government announced quite recently. That brings me to the end of what I wanted to share with you. Thank you for listening.